
CENTIPEDE

Scientific classification

Kingdom: Animalia

Phylum: Arthropoda

Subphylum: Myriapoda

Class: Chilopoda Latreille, 1817

Orders and Families

Centipedes (from Latin prefix centi-, "hundred", and pes, pedis, "foot") are arthropods belonging to the
class Chilopoda of the subphylum Myriapoda. They are elongated metameric animals with one pair of
legs per body segment. Despite the name, centipedes can have a varying number of legs from under 20 to
over 300. Centipedes have an odd number of pairs of legs, e.g. 15 or 17 pairs of legs (30 or 34 legs) but
never 16 pairs (32 legs).[1][2] A key trait uniting this group is a pair of venom claws or "forcipules" formed
from a modified first appendage. Centipedes are a predominantly carnivorous taxon.[3]:168

Centipedes normally have a drab coloration combining shades of brown and red. Cavernicolous (cave-
dwelling) and subterranean species may lack pigmentation and many tropical scolopendromorphs have
bright aposematic colours. Size can range from a few millimetres in the smaller lithobiomorphs and
geophilomorphs to about 30 cm (12 in) in the largest scolopendromorphs. Centipedes can be found in a
wide variety of environments.

Worldwide there are estimated to be 8,000 species of centipede,[4] of which 3,000 have been described.
Centipedes have a wide geographical range, reaching beyond the Arctic Circle.[3] Centipedes are found in
an array of terrestrial habitats from tropical rainforests to deserts. Within these habitats centipedes require
a moist micro-habitat because they lack the waxy cuticle of insects and arachnids, and so lose water
rapidly through the skin.[5] Accordingly, they are found in soil and leaf litter, under stones and dead wood,
and inside logs. Centipedes are among the largest terrestrial invertebrate predators and often contribute
significantly to the invertebrate predatory biomass in terrestrial ecosystems.

[edit] Description

Centipedes have a rounded or flattened head, bearing a pair of antennae at the forward margin. They have
a pair of elongated mandibles, and two pairs of maxillae. The first pair of maxillae forms the lower lip and bears short palps. The first pair of limbs stretches forward from the body to cover the remainder of the mouth. These limbs, or maxillipeds, end in sharp claws and include venom glands that help the animal to
kill or paralyse its prey.[5]

Centipedes possess a variable number of ocelli, which are sometimes clustered together to form true
compound eyes. Even so, it appears that centipedes are only capable of discerning light and dark, and not
of true vision. Indeed, many species lack eyes altogether. In some species the final pair of legs act as
sense organs similar to antennae, but facing backwards. An unusual sense organ found in some groups is the organ of Tömösváry. These organs are located at the base of the antennae, and consist of a disc-like structure
with a central pore surrounded by sensory cells. They are probably used for sensing vibrations, and may
even provide a sense of hearing.[5]

Underside of Scolopendra cingulata, showing the forcipules

Forcipules are unique to centipedes; they occur in no other arthropod group. The forcipules are
modifications of the first pair of legs, forming a pincer-like appendage always found just behind the
head.[6] Forcipules are not true mouthparts, although they are used in the capture of prey items, injecting
venom and holding onto captured prey. Venom glands run through a tube almost to the tip of each
forcipule.[6]

Behind the head, the body consists of fifteen or more segments. Most of the segments bear a single pair of
legs, with the maxillipeds projecting forward from the first body segment, and the final two segments
being small and legless. Each pair of legs is slightly longer than the pair immediately in front of it,
ensuring that they do not overlap, and therefore reducing the chance that they will collide with each other
while moving swiftly. In extreme cases, the last pair of legs may be twice the length of the first pair. The
final segment bears a telson and includes the openings of the reproductive organs.[5]

Centipedes are predators, and mainly use their antennae to seek out their prey. The digestive tract forms a
simple tube, with digestive glands attached to the mouthparts. Like insects, centipedes breathe through a
tracheal system, typically with a single opening, or spiracle, on each body segment. They excrete waste
through a single pair of malpighian tubules.[5]

Scolopendra gigantea, also known as the Amazonian giant centipede, is the largest existing species of
centipede in the world, reaching over 30 cm (12 in) in length. It is known to eat lizards, frogs, birds, mice,
and even bats, catching them in midflight,[7] as well as rodents and spiders. The now extinct Euphoberia
was the largest centipede, growing up to 1 m (39 in) in length.

[edit] Life cycle

A centipede protecting her eggmass

Centipede reproduction does not involve copulation. Males deposit a spermatophore for the female to take
up. In one clade, this spermatophore is deposited in a web, and the male undertakes a courtship dance to
encourage the female to engulf his sperm. In other cases, the males simply leave their spermatophores for the females to find.
In temperate areas egg laying occurs in spring and summer but in subtropical and tropical areas there
appears to be little seasonality to centipede breeding. It is also notable that there are a few known species
of parthenogenetic centipedes.[3]

The Lithobiomorpha and Scutigeromorpha lay their eggs singly in holes in the soil; the female fills the hole in over the egg and leaves it. The number of eggs laid ranges from about 10 to 50. Time of
development of the embryo to hatching is highly variable and may take from one to a few months. Time
of development to reproductive period is highly variable within and among species. For example, it can
take 3 years for Scutigera coleoptrata to achieve adulthood, whereas under the right conditions Lithobiomorph
species may reach a reproductive period in 1 year. In addition, centipedes are relatively long-lived when
compared to their insect cousins. For example: the European Lithobius forficatus can live for 5 or 6 years.
The combination of a small number of eggs laid, long gestation period, and long time of development to
reproduction has led authors to label Lithobiomorph centipedes as K-selected.[8]

Females of the Geophilomorpha and Scolopendromorpha show far more parental care. The eggs, 15 to 60 in number, are laid in a nest in the soil or in rotten wood, and the female stays with them, guarding and licking them to protect them from fungi. In some species the female stays with the young after they have hatched, guarding them until they are ready to leave. If disturbed, females tend to either abandon the eggs or young, or eat them; abandoned eggs tend to fall prey to fungi rapidly. Some species of
Scolopendromorpha are matriphagic, meaning that the offspring eat their mother.

Little is known of the life history of Craterostigmomorpha.

[edit] Anamorphy vs. epimorphy

Centipedes grow their legs at different points in their development. In the primitive condition, exhibited
by the Lithobiomorpha, Scutigeromorpha and Craterostigmomorpha, development is anamorphic. That is to say, more
pairs of legs are grown between moults; for example, Scutigera coleoptrata, the American house
centipede, hatches with only 4 pairs of legs and in successive moults has 5, 7, 9, 11, 15, 15, 15 and 15
before becoming a sexually mature adult. Life stages with fewer than 15 pairs of legs are called larval
stadia (~5 stages). After the full complement of legs is achieved, the now post-larval stadia (~5 stages)
develop gonopods, sensory pores, more antennal segments, and more ocelli. All mature Lithobiomorph
centipedes have 15 leg-bearing segments.[3]:27

The Craterostigmomorpha have only one phase of anamorphosis: embryos have 12 pairs of legs, and a single moult brings them to 15.

The clade Epimorpha, consisting of the orders Geophilomorpha and Scolopendromorpha, exhibits the derived condition of epimorphy: all pairs of legs are developed in the embryonic stages, and offspring do not develop more legs between moults. It is this clade that contains the longest centipedes. The maximum number of leg-bearing segments may also vary intra-specifically, often on a geographical basis, and in most cases females bear more legs than males. The number of leg-bearing segments varies widely, from 15 to 191, but the developmental mode of their creation means that they are always added in pairs – hence the total number of pairs is always odd.
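
A small sketch makes the parity argument explicit (Python, purely illustrative; the base count of 15 follows the lithobiomorph condition described above):

    # Why centipede leg-pair counts are always odd: development starts from
    # an odd base count, and segments are only ever added in pairs, so the
    # parity never changes. Illustrative sketch only.
    def reachable_pair_counts(base=15, maximum=191):
        """Enumerate leg-pair counts reachable by adding segments in pairs."""
        counts = []
        n = base
        while n <= maximum:
            counts.append(n)
            n += 2  # segments (and hence leg pairs) are added two at a time
        return counts

    counts = reachable_pair_counts()
    assert all(n % 2 == 1 for n in counts)  # every reachable count is odd
    assert 50 not in counts                 # 50 pairs (100 legs) never occurs

This is why, despite the name, no centipede has exactly 100 legs.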

[edit] Ecology

A centipede being eaten by a European roller

Centipedes are a predominantly predatory taxon. They are generalist predators, adapted to eat a variety of available prey. Examination of centipede gut contents suggests that plant material is an unimportant part of their diet, although centipedes have been observed to eat vegetable matter when starved during laboratory experiments.[3]:168

Centipedes are mostly nocturnal. Studies on centipede activity rhythms confirm this, although there are a few observations of centipedes active during the day, and one species, Strigamia chinophila, is diurnal. What centipedes actually eat is not well known because of their cryptic lifestyle and thorough mastication of food. Laboratory feeding trials support that they feed as generalists, taking almost anything that is soft-bodied and in a reasonable size range. It has been suggested that earthworms provide
the bulk of diets for Geophilomorphs, since Geophilomorphs burrow through the soil and earthworm
bodies would be easily pierced by their poison claws. Observations suggest that Geophilomorphs cannot
subdue earthworms larger than themselves, and so smaller earthworms may be a substantial proportion of
their diet.[9] Scolopendromorphs, given their size, are able to feed on vertebrates as well as invertebrates.
They have been observed eating reptiles, amphibians, small mammals, bats and birds. Collembola may
provide a large proportion of the Lithobiomorph diet. Little is known about Scutigeromorph or
Craterostigmomorph diets. All centipedes are potential intraguild predators. Centipedes and spiders may
frequently prey on one another.[3]

Centipedes are eaten by a great many vertebrates and invertebrates, such as mongooses, mice,
salamanders, beetles and snakes.[3]:354-356 They form an important item of diet for many species and the
staple diet of some such as the African ant Amblyopone pluto, which feeds solely on geophilomorph
centipedes,[10] and the South African Cape black-headed snake Aparallactus capensis.[3]:354-356

Centipedes are found in moist microhabitats. Water relations are an important aspect of their ecology,
since they lose water rapidly in dry conditions. Water loss is a result of centipedes lacking a waxy
covering of their exoskeleton and excreting waste nitrogen as ammonia, which requires extra water.
Centipedes deal with water loss through a variety of adaptations. Geophilomorphs lose water less rapidly
than Lithobiomorphs even though they have a greater surface area to volume ratio. This may be explained
by the fact that Geophilomorphs have a more heavily sclerotized pleural membrane. Spiracle shape, size
and ability to constrict also have an influence on rate of water loss. In addition, it has been suggested that
number and size of coxal pores may be variables affecting centipede water balance.

Centipedes live in many different habitat types: forest, savannah, prairie, and desert, to name a few. Some geophilomorphs are adapted to littoral habitats, where they feed on barnacles.[11] Species of all orders excluding the Craterostigmomorpha have adapted to caves. Centipede densities have been recorded as high as 600/m² and biomass as high as 500 mg/m² wet weight. Small geophilomorphs attain the highest densities, followed by small lithobiomorphs. Large lithobiomorphs attain densities of 20/m². One study of scolopendromorphs records Scolopendra morsitans in a Nigerian savannah at a density of 0.16/m² and a biomass of 140 mg/m² wet weight.[12]
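
As a back-of-the-envelope consistency check (an illustrative calculation, not part of the cited study), dividing the reported biomass by the reported density gives the implied average wet weight per individual:

    # Implied mean individual mass from the Nigerian savannah figures above.
    density_per_m2 = 0.16    # Scolopendra morsitans individuals per square metre
    biomass_mg_per_m2 = 140  # wet weight, mg per square metre

    mean_mass_mg = biomass_mg_per_m2 / density_per_m2
    print(f"{mean_mass_mg:.0f} mg per individual")  # 875 mg, i.e. ~0.9 g

An average of roughly 0.9 g per animal is plausible for a large-bodied scolopendromorph.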

[edit] Hazards to humans

Some species of centipede can be hazardous to humans because of their bite. Although a bite to an adult
human is usually very painful and may cause severe swelling, chills, fever, and weakness, it is unlikely to
be fatal. Bites can be dangerous to small children and those with allergies to bee stings. The bite of larger
centipedes can induce anaphylactic shock in such people. Smaller centipedes usually do not puncture
human skin.[13]

[edit] Evolution

Internal phylogeny of the Chilopoda


Chilopoda
    Scutigeromorpha
    Pleurostigmomorpha
        Lithobiomorpha
        Phylactometria
            Craterostigmomorpha
            Epimorpha
                Scolopendromorpha
                Geophilomorpha

The Scutigeromorpha, Lithobiomorpha and Craterostigmomorpha together form the paraphyletic Anamorpha.

The fossil record of centipedes extends back to 430 million years ago, during the Late Silurian.[14] They
belong to the subphylum Myriapoda which includes Diplopoda, Symphyla, and Pauropoda. The oldest
known fossil land animal, Pneumodesmus newmani, is a myriapod. Being among the earliest terrestrial
animals, centipedes were one of the first to fill a fundamental niche as ground level generalist predators in
detrital food webs. Today, centipedes are abundant and exist in many harsh habitats.

Within the myriapods, centipedes are believed to be the first of the extant classes to branch from the last
common ancestor. There are five orders of centipedes: Craterostigmomorpha, Geophilomorpha,
Lithobiomorpha, Scolopendromorpha, and Scutigeromorpha. These orders are united into the clade
Chilopoda by the following synapomorphies:[15]

1. The first post-cephalic appendage is modified into poison claws.
2. The embryonic cuticle on the second maxilliped has an egg tooth.
3. The trochanter–prefemur joint is fixed.
4. There is a spiral ridge on the nucleus of the spermatozoon.

Chilopoda is then split into two clades: the Notostigmomorpha, comprising the Scutigeromorpha, and the Pleurostigmomorpha, comprising the other four orders. The main difference is that the Notostigmomorpha have their spiracles located mid-dorsally. It was previously believed that Chilopoda was split into Anamorpha (Lithobiomorpha and Scutigeromorpha) and Epimorpha (Geophilomorpha and Scolopendromorpha), based on developmental modes, with the relationship of Craterostigmomorpha being uncertain. Recent phylogenetic analyses using combined molecular and morphological characters support the Notostigmomorpha–Pleurostigmomorpha arrangement:[15] Epimorpha still exists as a monophyletic group within the Pleurostigmomorpha, but Anamorpha is paraphyletic.

Geophilomorph centipedes have been used to argue for the developmental constraint of evolution: that the evolvability of a trait – the number of segments, in the case of geophilomorph centipedes – is constrained by the mode of development. Geophilomorph centipedes have variable segment numbers within species, yet, as with all centipedes, they always have an odd number of pairs of legs. In this taxon the number of leg-bearing segments ranges from 27 to 191, but it is never an even number.[16]

[edit] Orders and families

Representatives of centipede orders

Scutigera coleoptrata
(Scutigeromorpha: Scutigeridae)

Lithobius forficatus
(Lithobiomorpha: Lithobiidae)

Geophilus flavus
(Geophilomorpha: Geophilidae)

[edit] Scutigeromorpha

The Scutigeromorpha are anamorphic, reaching 15 leg-bearing segments in length. They are very fast
creatures, and able to withstand falling at great speed: they reach up to 15 body lengths per second when
dropped, surviving the fall. They are the only centipede group to retain their original compound eyes, in which a crystalline layer analogous to that seen in chelicerates and insects can be observed. They
also bear long and multi-segmented antennae. Adaptation to a burrowing lifestyle has led to the
degeneration of compound eyes in other orders. This feature is of great use in phylogenetic analysis. The
group is the sole extant representative of the Notostigmomorpha, defined by having a single spiracle
opening at the posterior of each dorsal plate. The more derived groups bear a plurality of spiracular
openings on their sides, and are termed the Pleurostigmomorpha. Some even have several unpaired
spiracles that can be found along the mid-dorsal line and closer to their posterior section of tergites. There
are three families: Pselliodidae, Scutigeridae and Scutigerinidae.

[edit] Lithobiomorpha

The Lithobiomorpha represent the other main group of anamorphic centipedes; they also reach a mature length of 15 leg-bearing segments. This group has lost the compound eyes, and sometimes has no eyes at all; where eyes are present, they take the form of single facets or groups of facets. Its spiracles are paired and located laterally. Every leg-bearing segment of this organism has a separate tergite. Its antennae and legs are relatively short. Two families are included, Henicopidae and Lithobiidae.

[edit] Craterostigmomorpha

The Craterostigmomorpha are the least diverse centipede clade, comprising only two extant species, both in the genus Craterostigmus.[17] Their geographic range is restricted to Tasmania and New Zealand. They have a distinct body plan; their anamorphosis comprises a single stage: they grow from 12 to 15 segments in their first moult. Their low diversity and intermediate position between the primitive anamorphic centipedes and the derived Epimorpha has led to them being likened to the platypus.[17] They represent the survivors of a once diverse clade. Maternal brooding unites the Craterostigmomorpha with the Epimorpha into the clade Phylactometria. This trait is thought to be closely linked with the presence of sternal pores, which secrete sticky or noxious secretions that mainly serve to repel predators and parasites. The presence of these pores on the Devonian Devonobius permits its inclusion in this clade, allowing its divergence to be dated to 375 (or more) million years ago.[18]

[edit] Scolopendromorpha

The Scolopendromorpha possess 21 or more leg-bearing segments, with the same number of pairs of legs. Their antennae have 17 or more segments, and their eyes have at least four facets on each side. The order comprises the three families Cryptopidae, Scolopendridae and Scolopocryptopidae.

[edit] Geophilomorpha

The Geophilomorpha bear upwards of 27 leg-bearing segments. They are eyeless, and bear spiracles on all leg-bearing segments – in contrast to other groups, which bear them only on their 3rd, 5th, 8th, 10th and 12th segments. A "mid-body break", accompanied by a change in tagmatic shape, occurs roughly at the interchange from odd to even segments. This group, at 1,260 species the most diverse, also contains the largest and leggiest specimens, at 29 or more pairs of legs. They also have 14-segmented antennae. The group includes four families: Mecistocephalidae, Neogeophilidae, Geophilidae and Linotaeniidae.

MILLIPEDE

Scientific classification

Kingdom: Animalia

Phylum: Arthropoda

Subphylum: Myriapoda

Class: Diplopoda De Blainville in Gervais, 1844[1]

Subclasses, orders and families

Millipedes are arthropods that have two pairs of legs per segment (except for the first segment behind the
head which does not have any appendages at all, and the next few which only have one pair of legs). Each
segment that has two pairs of legs is a result of two single segments fused together as one. Most
millipedes have very elongated cylindrical bodies, although some are flattened dorso-ventrally, while pill
millipedes are shorter and can roll into a ball, like a pillbug.

The name "millipede" is a compound word formed from the Latin roots mille ("thousand") and pes
("foot"). Despite their name, millipedes do not have 1,000 legs, although the rare species Illacme plenipes
has up to 750.[2] Common species have between 36 and 400 legs. The class contains around 10,000
species in 13 orders and 115 families. The giant African millipede (Archispirostreptus gigas), known as
shongololos, is the largest species of millipede.

Millipedes are detritivores and slow moving. Most millipedes eat decaying leaves and other dead plant matter, moistening the food with secretions and then scraping it in with their jaws. However, they can also be a minor garden pest, especially in greenhouses, where they can cause severe damage to emergent seedlings. Signs of millipede damage include the stripping of the outer layers of a young plant stem and irregular damage to leaves and plant apices.

Millipedes can be easily distinguished from the somewhat similar and related centipedes (Class
Chilopoda), which move rapidly, and have a single pair of legs for each body segment.

[edit] Evolution

This class of arthropod is thought to be among the first animals to have colonised land during the Silurian
geologic period. These early forms probably ate mosses and primitive vascular plants. The oldest known
land creature, Pneumodesmus newmani, was a 1 centimetre (0.39 in) long millipede, and lived 428
million years ago.[3] In the Upper Carboniferous (340 to 280 million years ago), Arthropleura became the
largest known land invertebrate of all time, reaching lengths of up to 2.6 metres (8 ft 6 in).

[edit] Characteristics

The North American millipede Narceus americanus — head with eyes

Millipedes range from 2 to 280 millimetres (0.079 to 11 in) in length, and can have as few as eleven to over a hundred segments. They are generally black or brown in colour, although there are a few brightly coloured species.

The millipede's most obvious feature is its large number of legs. Having very many short legs makes millipedes rather slow, but they are powerful burrowers. Moving their legs along the body in a wavelike pattern, they easily force their way underground head first. They also seem to have some engineering ability, reinforcing the tunnel by rearranging the particles around it.

The head of a millipede is typically rounded above and flattened below and bears large mandibles. The
body is flattened or cylindrical, with a single chitinous plate above, one at each side, and two or three on the underside. In many millipedes, these plates are fused to varying degrees, sometimes forming a single
cylindrical ring. The plates are typically hard, being impregnated with calcium salts.[4]

Unlike centipedes and other similar animals, each segment bears two pairs of legs, rather than just one.
This is because each is actually formed by the fusion of two embryonic segments, and is therefore
properly referred to as a "diplosegment", or double segment. The first few segments behind the head are
not fused in this fashion: the first segment is legless and is called the collum, while the second to
fourth have one pair each. In some millipedes, the last few segments may also be legless. The final
segment bears a telson.[4]
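
The segment plan just described can be expressed as a simple counting model. The sketch below (Python; the ring counts are illustrative assumptions, and real species vary widely) shows how the total leg count follows from the number of body rings:

    # Leg count implied by the millipede segment plan described above: one
    # legless collum, three rings bearing one pair each, a few legless rear
    # rings, and every remaining ring a diplosegment with two pairs of legs.
    def millipede_leg_count(total_rings, legless_rear=2):
        collum = 1              # first ring, no legs
        single_pair_rings = 3   # rings 2-4, one pair each
        diplosegments = total_rings - collum - single_pair_rings - legless_rear
        leg_pairs = single_pair_rings + 2 * diplosegments
        return 2 * leg_pairs    # two legs per pair

    print(millipede_leg_count(50))  # 182 legs for a hypothetical 50-ring animal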

Millipedes breathe through two pairs of spiracles on each diplosegment. Each opens into an internal
pouch, and connects to a system of tracheae. The heart runs the entire length of the body, with an aorta
stretching into the head. The excretory organs are two pairs of malpighian tubules, located near the mid-
part of the gut.[4]

The head contains a pair of sensory organs known as the Tömösváry organs. These are found just
posterior and lateral to the antennae, and are shaped as small, oval rings at the base of the antennae.
They are probably used to measure the humidity in the surroundings, and they may have some
chemoreceptory abilities too. Millipede eyes consist of a number of simple flat-lensed ocelli arranged in a
group on the front/side of the head. Many species of millipedes, such as cave-dwelling millipedes, have
secondarily lost their eyes.

According to Guinness World Records the African giant black millipede Archispirostreptus gigas can
grow to 38.6 centimetres (15.2 in).[5]

[edit] Diet

Most millipedes are herbivorous, and feed on decomposing vegetation or organic matter mixed with soil.
A few species are omnivorous or carnivorous, and may prey on small arthropods, such as insects and
centipedes, or on earthworms. Some species have piercing mouth parts that allow them to feed on plant
juices.

The digestive tract is a simple tube with two pairs of salivary glands to help digest the food. Many
millipedes moisten their food with saliva before eating it.[4]

[edit] Reproduction

Epibolus pulchripes mating

Male millipedes can be differentiated from female millipedes by the presence of one or two pairs of legs
modified into gonopods. These modified legs, which are usually on the seventh segment, are used to
transfer sperm packets to the female during copulation.[6] A few species are parthenogenetic, having few,
if any, males.

The genital openings are located on the third segment, and are accompanied in the male by one or two
penises, which deposit the sperm packets onto the gonopods. In the female, the genital pores open into a
small chamber, or vulva, which is covered by a small hood-like cover, and is used to store the sperm after
copulation.[4]

Females lay between ten and three hundred eggs at a time, depending on species, fertilising them with the
stored sperm as they do so. Many species simply deposit the eggs on moist soil or organic detritus, but
some construct nests lined with dried faeces.

The young hatch after a few weeks, and typically have only three pairs of legs, followed by up to four
legless segments. As they grow, they continually moult, adding further segments and legs as they do so.
Some species moult within specially prepared chambers, which they may also use to wait out dry weather,
and most species eat the shed exoskeleton after moulting. Millipedes live from one to ten years,
depending on species.[4]

[edit] Defense mechanisms

Due to their lack of speed and their inability to bite or sting, millipedes' primary defense mechanism is to curl into a tight coil, protecting their delicate legs inside an armored exterior. Many species also emit poisonous liquid secretions or hydrogen cyanide gas as a secondary defense, through microscopic pores (the openings of odoriferous glands) along the sides of their bodies.[7][8][9] Some of these substances are caustic and can burn the exoskeleton of ants and other insect predators, and the skin and eyes of larger predators. Animals such as capuchin monkeys have been observed intentionally irritating millipedes in order to rub the chemicals on themselves to repel mosquitoes.[10] At least one species, Polyxenus fasciculatus, employs detachable bristles to entangle ants.[11]

As far as humans are concerned, this chemical brew is fairly harmless, usually causing only minor effects on the skin. The main effect is discoloration, but other effects may include pain, itching, local erythema, edema, blisters, eczema, and occasionally cracked skin.[8][12][13][14] Eye exposure to these secretions causes general eye irritation and potentially more severe effects such as conjunctivitis and keratitis.[15] First aid consists of flushing the area thoroughly with water; further treatment is aimed at relieving the local effects.

BUTTERFLY

Scientific classification

Kingdom: Animalia

Phylum: Arthropoda

Class: Insecta

Order: Lepidoptera

(unranked): Rhopalocera

A butterfly is a mainly day-flying insect of the order Lepidoptera, the butterflies and moths. Like other
holometabolous insects, the butterfly's life cycle consists of four parts, egg, larva, pupa and adult. Most
species are diurnal. Butterflies have large, often brightly coloured wings, and conspicuous, fluttering
flight. Butterflies comprise the true butterflies (superfamily Papilionoidea), the skippers (superfamily
Hesperioidea) and the moth-butterflies (superfamily Hedyloidea). All the many other families within the
Lepidoptera are referred to as moths.

Butterflies exhibit polymorphism, mimicry and aposematism. Some, like the Monarch, will migrate over
long distances. Some butterflies have evolved symbiotic and parasitic relationships with social insects
such as ants. Some species are pests because in their larval stages they can damage domestic crops or
trees; however, some species are agents of pollination of some plants, and caterpillars of a few butterflies
(e.g., Harvesters) eat harmful insects. Culturally, butterflies are a popular motif in the visual and literary
arts.

Life cycle

Mating Common Buckeye Butterflies

It is a popular belief that butterflies have very short life spans. However, butterflies in their adult stage
can live from a week to nearly a year depending on the species. Many species have long larval life stages
while others can remain dormant in their pupal or egg stages and thereby survive winters.[1]

Butterflies may have one or more broods per year. The number of generations per year varies from temperate to tropical regions, with tropical regions showing a trend towards multivoltinism.

Egg

Egg of Ariadne merione

Butterfly eggs are protected by a hard-ridged outer layer of shell, called the chorion. This is lined with a
thin coating of wax which prevents the egg from drying out before the larva has had time to fully develop.
Each egg contains a number of tiny funnel-shaped openings at one end, called micropyles; the purpose of
these holes is to allow sperm to enter and fertilize the egg. Butterfly and moth eggs vary greatly in size
between species, but they are all either spherical or ovate.

Butterfly eggs are fixed to a leaf with a special glue which hardens rapidly. As it hardens it contracts,
deforming the shape of the egg. This glue is easily seen surrounding the base of every egg forming a
meniscus. The nature of the glue is unknown and is a suitable subject for research. The same glue is
produced by a pupa to secure the setae of the cremaster. This glue is so hard that the silk pad, to which the
setae are glued, cannot be separated.

Eggs are usually laid on plants. Each species of butterfly has its own host plant range and while some
species of butterfly are restricted to just one species of plant, others use a range of plant species, often
including members of a common family.

The egg stage lasts a few weeks in most butterflies but eggs laid close to winter, especially in temperate
regions, go through a diapause (resting) stage, and the hatching may take place only in spring. Other
butterflies may lay their eggs in the spring and have them hatch in the summer. These butterflies are
usually northern species, such as the Mourning Cloak (Camberwell Beauty) and the Large and Small
Tortoiseshell butterflies.

Caterpillars

Caterpillars of Junonia coenia.

Butterfly larvae, or caterpillars, consume plant leaves and spend practically all of their time in search of
food. Although most caterpillars are herbivorous, a few species such as Spalgis epius and Liphyra
brassolis are entomophagous (insect eating).

Some larvae, especially those of the Lycaenidae, form mutual associations with ants. They communicate
with the ants using vibrations that are transmitted through the substrate as well as using chemical
signals.[2][3] The ants provide some degree of protection to these larvae and they in turn gather honeydew
secretions.

Caterpillars mature through a series of stages called instars. Near the end of each instar, the larva
undergoes a process called apolysis, in which the cuticle, a tough outer layer made of a mixture of chitin
and specialized proteins, is released from the softer epidermis beneath, and the epidermis begins to form a
new cuticle beneath. At the end of each instar, the larva moults the old cuticle, and the new cuticle
expands, before rapidly hardening and developing pigment. Development of butterfly wing patterns
begins by the last larval instar.

Butterfly caterpillars have three pairs of true legs from the thoracic segments and up to 6 pairs of prolegs
arising from the abdominal segments. These prolegs have rings of tiny hooks called crochets that help
them grip the substrate.

Some caterpillars have the ability to inflate parts of their head to appear snake-like. Many have false eye-
spots to enhance this effect. Some caterpillars have special structures called osmeteria which are everted
to produce smelly chemicals. These are used in defense.

Host plants often have toxic substances in them, and caterpillars are able to sequester these substances and retain them into the adult stage. This helps make them unpalatable to birds and other predators. Such unpalatability is advertised using bright red, orange, black or white warning colours. The toxic chemicals
in plants are often evolved specifically to prevent them from being eaten by insects. Insects in turn
develop countermeasures or make use of these toxins for their own survival. This "arms race" has led to
the coevolution of insects and their host plants.[4]

Wing development

Last instar wing disk, Junonia coenia

Detail of a butterfly wing

Wings or wing pads are not visible on the outside of the larva, but when larvae are dissected, tiny
developing wing disks can be found on the second and third thoracic segments, in place of the spiracles
that are apparent on abdominal segments. Wing disks develop in association with a trachea that runs
along the base of the wing, and are surrounded by a thin peripodial membrane, which is linked to the
outer epidermis of the larva by a tiny duct.

Wing disks are very small until the last larval instar, when they increase dramatically in size, are invaded
by branching tracheae from the wing base that precede the formation of the wing veins, and begin to
develop patterns associated with several landmarks of the wing.

Near pupation, the wings are forced outside the epidermis under pressure from the hemolymph, and
although they are initially quite flexible and fragile, by the time the pupa breaks free of the larval cuticle
they have adhered tightly to the outer cuticle of the pupa (in obtect pupae). Within hours, the wings form
a cuticle so hard and well-joined to the body that pupae can be picked up and handled without damage to
the wings.

Pupa

Chrysalis of Gulf Fritillary

When the larva is fully grown, hormones such as prothoracicotropic hormone (PTTH) are produced. At this point the larva stops feeding and begins "wandering" in search of a suitable pupation site, often the underside of a leaf.

The larva transforms into a pupa (or chrysalis) by anchoring itself to a substrate and moulting for the last
time. The chrysalis is usually incapable of movement, although some species can rapidly move the
abdominal segments or produce sounds to scare potential predators.

The pupal transformation into a butterfly through metamorphosis has held great appeal to mankind. To
transform from the miniature wings visible on the outside of the pupa into large structures usable for
flight, the pupal wings undergo rapid mitosis and absorb a great deal of nutrients. If one wing is surgically
removed early on, the other three will grow to a larger size. In the pupa, the wing forms a structure that
becomes compressed from top to bottom and pleated from proximal to distal ends as it grows, so that it
can rapidly be unfolded to its full adult size. Several boundaries seen in the adult color pattern are marked
by changes in the expression of particular transcription factors in the early pupa.

Adult or imago

The adult, sexually mature, stage of the insect is known as the imago. As Lepidoptera, butterflies have
four wings that are covered with tiny scales (see photo). The fore and hindwings are not hooked together,
permitting a more graceful flight. An adult butterfly has six legs, but in the nymphalids, the first pair is

16
reduced. After it emerges from its pupal stage, a butterfly cannot fly until the wings are unfolded. A
newly emerged butterfly needs to spend some time inflating its wings with blood and letting them dry,
during which time it is extremely vulnerable to predators. Some butterflies' wings may take up to three
hours to dry while others take about one hour. Most butterflies and moths will excrete excess dye after
hatching. This fluid may be white, red, orange, or in rare cases, blue.

External morphology

Main article: Glossary of Lepidopteran terms

Parts of an adult butterfly


Butterflies have two antennae, two compound eyes, and a proboscis

Adult butterflies have four wings: a forewing and hindwing on both the left and the right side of the body.
The body is divided into three segments: the head, thorax, and the abdomen. They have two antennae, two
compound eyes, and a proboscis.

Scales

Butterflies are characterized by their scale-covered wings. The coloration of butterfly wings is created by
minute scales. These scales are pigmented with melanins that give them blacks and browns, but blues,
greens, reds and iridescence are usually created not by pigments but the microstructure of the scales. This
structural coloration is the result of coherent scattering of light by the photonic crystal nature of the
scales.[5][6][7] The scales cling somewhat loosely to the wing and come off easily without harming the
butterfly.
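
The coherent scattering involved can be summarized by the standard multilayer (thin-film) interference condition, a textbook idealization rather than anything specific to this article: reflection from a stack of layers of refractive index n and thickness d reinforces wavelengths for which the optical path difference between successive layers is a whole number of wavelengths,

    % Constructive-interference condition for an idealized multilayer stack
    % (n: refractive index, d: layer thickness, theta: angle of refraction
    % inside a layer, m: interference order).
    2 n d \cos\theta = m \lambda, \qquad m = 1, 2, 3, \dots

For example, taking n ≈ 1.56 (a commonly quoted value for chitin) and d ≈ 150 nm at normal incidence gives a first-order reflection peak near 470 nm, in the blue.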

[Image gallery: photographic and light-microscope images – a zoomed-out view of an Inachis io, a closeup of the scales of the same specimen, and a high magnification of the coloured scales (probably of a different species); electron-microscope images – a patch of wing, scales close up, a single scale, and the microstructure of a scale. Magnifications range from approx. ×50 to ×5000.]

Polymorphism

Many adult butterflies exhibit polymorphism, showing differences in appearance. These variations
include geographic variants and seasonal forms. In addition many species have females in multiple forms,
often with mimetic forms. Sexual dimorphism in coloration and appearance is widespread in butterflies.
In addition many species show sexual dimorphism in the patterns of ultraviolet reflectivity, while
otherwise appearing identical to the unaided human eye. Most butterflies have a ZW sex-determination system, with females being the heterogametic sex (ZW) and males the homogametic sex (ZZ).[8]
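
The ZW scheme can be illustrated with a minimal sketch (Python, illustrative only): each offspring receives one sex chromosome from each parent, so a ZW × ZZ cross yields females and males in a 1:1 ratio.

    # Offspring sex ratio under the ZW system described above: the mother
    # (ZW) passes on either Z or W, while the father (ZZ) always passes Z.
    from collections import Counter
    from itertools import product

    mother, father = "ZW", "ZZ"
    offspring = ["".join(sorted(pair, reverse=True))  # normalize "WZ" to "ZW"
                 for pair in product(mother, father)]
    print(Counter(offspring))  # Counter({'ZZ': 2, 'ZW': 2}) -> 1:1 males:females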

Genetic abnormalities such as gynandromorphy also occur from time to time. In addition many butterflies
are infected by Wolbachia and infection by the bacteria can lead to the conversion of males into females[9]
or the selective killing of males in the egg stage.[10]

Mimicry

The Heliconius butterflies from the tropics of the Western Hemisphere are the classical model for
Müllerian mimicry.[11]

Batesian and Müllerian mimicry in butterflies is common. Batesian mimics imitate other species to enjoy the protection of an attribute they do not share (aposematism, in this case). The Common Mormon of India has female morphs which imitate the unpalatable red-bodied swallowtails, the Common Rose and the Crimson Rose. Müllerian mimicry occurs when aposematic species evolve to resemble each other, presumably to reduce predator sampling rates; the Heliconius butterflies from the Americas are a good example.

Wing markings called eyespots are present in some species; these may have an automimicry role for some
species. In others, the function may be intraspecies communication, such as mate attraction. In several
cases, however, the function of butterfly eyespots is not clear, and may be an evolutionary anomaly
related to the relative elasticity of the genes that encode the spots.[12][13]

Seasonal polyphenism

Many of the tropical butterflies have distinctive seasonal forms. This phenomenon is termed seasonal polyphenism, and the seasonal forms of the butterflies are called the dry-season and wet-season forms. How the season affects the genetic expression of patterns is still a subject of research.[14] Experimental modification by ecdysone hormone treatment has demonstrated that it is possible to control the continuum of expression of variation between the wet- and dry-season forms.[15] The dry-season forms are usually more cryptic, and it has been suggested that the protection offered may be an adaptation. Some also show darker colours in the wet-season form, which may have thermoregulatory advantages by increasing the ability to absorb solar radiation.[16]

Bicyclus anynana is a species of butterfly that exhibits a clear example of seasonal polyphenism. These
butterflies, endemic to Africa, have two distinct phenotypic forms that alternate according to the season.
The wet-season forms have large, very apparent ventral eyespots whereas the dry-season forms have very
reduced, oftentimes nonexistent, ventral eyespots. Larvae that develop in hot, wet conditions develop into
wet-season adults, whereas those growing in the transition from the wet to the dry season, when the
temperature is declining, develop into dry-season adults.[17] This polyphenism has an adaptive role in B.
anynana. In the dry-season it is disadvantageous to have conspicuous eyespots because B. anynana blend
in with the brown vegetation better without eyespots. By not developing eyespots in the dry-season they
can more easily camouflage themselves in the brown brush. This minimizes the risk of visually mediated
predation. In the wet-season, these brown butterflies cannot as easily rely on cryptic coloration for
protection because the background vegetation is green. Thus, eyespots, which may function to decrease
predation, are beneficial for B. anynana to express.[18]

Habits

Antennal shape in the Lepidoptera from C. T. Bingham (1905)

The Australian painted lady feeding on a flowering shrub

Butterflies feed primarily on nectar from flowers. Some also derive nourishment from pollen,[19] tree sap,
rotting fruit, dung, decaying flesh, and dissolved minerals in wet sand or dirt. Butterflies are important as
pollinators for some species of plants although in general they do not carry as much pollen load as bees.
They are however capable of moving pollen over greater distances.[20] Flower constancy has been
observed for at least one species of butterfly.[21]

As adults, butterflies consume only liquids and these are sucked by means of their proboscis. They feed
on nectar from flowers and also sip water from damp patches. This they do for water, for energy from
sugars in nectar and for sodium and other minerals which are vital for their reproduction. Several species
of butterflies need more sodium than provided by nectar. They are attracted to sodium in salt and they
sometimes land on people, attracted by human sweat. Besides damp patches, some butterflies also visit
dung, rotting fruit or carcasses to obtain minerals and nutrients. In many species, this mud-puddling
behaviour is restricted to the males, and studies have suggested that the nutrients collected are provided as
a nuptial gift along with the spermatophore during mating.[22]

Butterflies sense the air for scents, wind and nectar using their antennae. The antennae come in various
shapes and colours. The hesperids have a pointed angle or hook to the antennae, while most other families
show knobbed antennae. The antennae are richly covered with sensillae. A butterfly's sense of taste is
coordinated by chemoreceptors on the tarsi, or feet, which work only on contact, and are used to
determine whether an egg-laying insect's offspring will be able to feed on a leaf before eggs are laid on
it.[23] Many butterflies use chemical signals (pheromones); specialized scent scales (androconia) and other structures (coremata, or 'hair pencils', in the Danaidae) are developed in some species.

Vision is well developed in butterflies and most species are sensitive to the ultraviolet spectrum. Many
species show sexual dimorphism in the patterns of UV reflective patches.[24] Color vision may be
widespread but has been demonstrated in only a few species.[25][26]

Some butterflies have organs of hearing and some species are also known to make stridulatory and
clicking sounds.[27]

Monarch butterflies

Many butterflies, such as the Monarch butterfly, are migratory and capable of long distance flights. They
migrate during the day and use the sun to orient themselves. They also perceive polarized light and use it
for orientation when the sun is hidden.[28]

Many species of butterfly maintain territories and actively chase other species or individuals that may
stray into them. Some species will bask or perch on chosen perches. The flight styles of butterflies are
often characteristic and some species have courtship flight displays. Basking is an activity which is more
common in the cooler hours of the morning. Many species will orient themselves to gather heat from the
sun. Some species have evolved dark wingbases to help in gathering more heat and this is especially
evident in alpine forms.[29]

Flight

Geitoneura klugii taking off

See also Insect flight

Like many other members of the insect world, the lift generated by butterflies is more than what can be
accounted for by steady-state, non-transitory aerodynamics. Studies using Vanessa atalanta in a wind tunnel show that they use a wide variety of aerodynamic mechanisms to generate force. These
include wake capture, vortices at the wing edge, rotational mechanisms and Weis-Fogh 'clap-and-fling'
mechanisms. The butterflies were also able to change from one mode to another rapidly.[30]
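
For reference, the steady-state baseline that these measured forces exceed is the standard lift equation (a textbook relation, not taken from the cited study):

    % Steady-state lift (rho: air density, v: airspeed, S: wing area,
    % C_L: lift coefficient). Unsteady mechanisms such as wake capture and
    % clap-and-fling let butterflies generate force beyond this estimate.
    L = \tfrac{1}{2} \rho v^2 S C_L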

Migration

The Monarch butterfly migrates large distances

Main article: Lepidoptera migration

See also Insect migration

Many butterflies migrate over long distances. Particularly famous migrations are those of the Monarch
butterfly from Mexico to northern USA and southern Canada, a distance of about 4000 to 4800 km
(2500–3000 miles). Other well known migratory species include the Painted Lady and several of the
Danaine butterflies. Spectacular and large scale migrations associated with the Monsoons are seen in
peninsular India.[31] Migrations have been studied in more recent times using wing tags and also using
stable hydrogen isotopes.[32][33]

Butterflies have been shown to navigate using time compensated sun compasses. They can see polarized
light and therefore orient even in cloudy conditions. The polarized light in the region close to the
ultraviolet spectrum is suggested to be particularly important.[34]
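
Time compensation can be sketched as a simple model (Python; the 15° per hour solar motion and due-south noon azimuth are simplifying assumptions for a northern-hemisphere observer, not a description of real butterfly neurobiology):

    # Idealized time-compensated sun compass: hold a fixed geographic bearing
    # by offsetting from the sun's expected azimuth, approximated here as
    # sweeping 15 degrees per hour through due south at solar noon.
    def expected_sun_azimuth(hour):
        return (180 + 15 * (hour - 12)) % 360

    def heading_relative_to_sun(desired_bearing, hour):
        return (desired_bearing - expected_sun_azimuth(hour)) % 360

    # A butterfly holding a southwesterly course (bearing 225 degrees) must
    # keep the sun at a different relative angle as the day advances:
    for hour in (9, 12, 15):
        print(hour, heading_relative_to_sun(225, hour))  # 90, 45, 0 degrees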

It is suggested that most migratory butterflies are those that belong to semi-arid areas where breeding
seasons are short.[35] The life-histories of their host plants also influence the strategies of the butterflies.[36]

Defense

The wings of a butterfly (here a Leopard Lacewing, Cethosia cyane) become increasingly damaged as it ages, and do not repair themselves

Butterflies are threatened in their early stages by parasitoids and in all stages by predators, diseases and
environmental factors. They protect themselves by a variety of means.

Chemical defenses are widespread and are mostly based on chemicals of plant origin. In many cases the
plants themselves evolved these toxic substances as protection against herbivores. Butterflies have
evolved mechanisms to sequester these plant toxins and use them instead in their own defense.[37] These
defense mechanisms are effective only if they are also well advertised and this has led to the evolution of
bright colours in unpalatable butterflies. This signal may be mimicked by other butterflies. These mimetic
forms are usually restricted to the females.

Eyespots on the wings of this butterfly are part of the animal's defense

Cryptic coloration is found in many butterflies. Some like the oakleaf butterfly are remarkable imitations
of leaves.[38] As caterpillars, many defend themselves by freezing and appearing like sticks or branches.
Some papilionid caterpillars resemble bird droppings in their early instars. Some caterpillars have hairs
and bristly structures that provide protection while others are gregarious and form dense aggregations.
Some species also form associations with ants and gain their protection (See Myrmecophile).

Behavioural defenses include perching and wing positions to avoid being conspicuous. Some female
Nymphalid butterflies are known to guard their eggs from parasitoid wasps.[39]

Eyespots and tails are found in many lycaenid butterflies and these divert the attention of predators from
the more vital head region. An alternative theory is that these cause ambush predators such as spiders to
approach from the wrong end and allow for early visual detection.[40]

A butterfly's hind wings are thought to allow the butterfly to take swift, tight turns to evade predators.[41]

Notable species

Rusty-tipped Page (Siproeta epaphus), Butterfly World (Florida)

There are between 15,000 and 20,000 species of butterflies worldwide. Some well-known species from
around the world include:

 Swallowtails and Birdwings, Family Papilionidae


o Common Yellow Swallowtail, Papilio machaon
o Spicebush Swallowtail, Papilio troilus
o Lime Butterfly, Papilio demoleus
o Ornithoptera genus (Birdwings; the largest butterflies)
 Whites and Yellows, Family Pieridae
o Small White, Pieris rapae
o Green-veined White, Pieris napi
o Common Jezebel, Delias eucharis
 Blues and Coppers or Gossamer-Winged Butterflies, Family Lycaenidae
o Xerces Blue, Glaucopsyche xerces (extinct)
o Karner Blue, Lycaeides melissa samuelis (endangered)
o Red Pierrot, Talicada nyseus
 Metalmark butterflies, Family Riodinidae
o Duke of Burgundy, Hamearis lucina
o Plum Judy, Abisara echerius
 Brush-footed butterflies, Family Nymphalidae
o Painted Lady, or Cosmopolitan, Vanessa cardui
o Monarch butterfly, Danaus plexippus
o Morpho genus
o Speckled Wood, Pararge aegeria
 Skippers, Family Hesperiidae
o Mallow Skipper, Carcharodus alceae
o Zabulon Skipper, Poanes zabulon

In culture

Art

Artistic depictions of butterflies have been used in many cultures including Egyptian hieroglyphs 3500
years ago.[42]

In the ancient Mesoamerican city of Teotihuacan, the brilliantly colored image of the butterfly was carved
into many temples, buildings, jewelry, and emblazoned on incense burners in particular. The butterfly was
sometimes depicted with the maw of a jaguar and some species were considered to be the reincarnations
of the souls of dead warriors. The close association of butterflies to fire and warfare persisted through to
the Aztec civilization, and evidence of similar jaguar-butterfly images has been found among the Zapotec and Mayan civilizations.[43]

Today, butterflies are widely used in various objects of art and jewelry: mounted in frame, embedded in
resin, displayed in bottles, laminated in paper, and used in some mixed media artworks and furnishings.[44]
Butterflies have also inspired the "butterfly fairy" as an art and fictional character, including in the Barbie
Mariposa film.

Symbolism

According to Kwaidan: Stories and Studies of Strange Things, by Lafcadio Hearn, a butterfly was seen in
Japan as the personification of a person's soul; whether they be living, dying, or already dead. One
Japanese superstition says that if a butterfly enters your guestroom and perches behind the bamboo
screen, the person whom you most love is coming to see you. However, large numbers of butterflies are
viewed as bad omens. When Taira no Masakado was secretly preparing for his famous revolt, there
appeared in Kyoto so vast a swarm of butterflies that the people were frightened — thinking the
apparition to be a portent of coming evil.[45]

The Russian word for "butterfly", бабочка (bábochka), also means "bow tie". It is a diminutive of "baba"
or "babka" (= "woman, grandmother, cake"), whence also "babushka" = "grandmother".

The Ancient Greek word for "butterfly" is ψυχή (psȳchē), which primarily means "soul", "mind".[46]

According to Mircea Eliade's Encyclopedia of Religion, some of the Nagas of Manipur trace their
ancestry from a butterfly.[47]

Butterfly and Chinese wisteria flowers, by Xü Xi (c. 886 – c. 975), painted around 970 during the early Song Dynasty.

In Chinese culture two butterflies flying together are a symbol of love; a famous Chinese folk story, the Butterfly Lovers, reflects this symbolism. The Taoist philosopher Zhuangzi once had a dream of being a butterfly flying without care about humanity; however, when he woke up and realized it was just a dream, he thought to himself, "Was I before a man who dreamt about being a butterfly, or am I now a butterfly who dreams about being a man?"

In some old cultures, butterflies also symbolize rebirth into a new life after being inside a cocoon for a
period of time.

Jose Rizal delivered a speech in 1884 in a banquet and mentioned "the Oriental chrysalis ... is about to
leave its cocoon" comparing the emergence of a "new Philippines" with that of butterfly
metamorphosis.[48] He has also often used the butterfly imagery in his poems and other writings to express
the Spanish Colonial Filipinos' longing for liberty.[49] Much later, in a letter to Ferdinand Blumentritt,
Rizal compared his life in exile to a weary butterfly with sun-burnt wings.[50]

Der Schmetterlingsjäger (The butterfly hunter) by Carl Spitzweg (1840), a depiction from the era of
butterfly collection.

Some people say that when a butterfly lands on you it means good luck.[citation needed] However, in
Devonshire, people would traditionally rush around to kill the first butterfly of the year that they see, or
else face a year of bad luck.[51] Also, in the Philippines, a lingering black butterfly or moth in the house is
taken to mean that someone in the family has died or will soon die.[52]

The idiom "butterflies in the stomach" is used to describe a state of nervousness.

In the NBC television show Kings, butterflies are the national symbol of the fictional nation of Gilboa and
a sign of God's favor.

Technological inspiration

Research on the wing structure of Palawan Birdwing butterflies led to new wide-wingspan kite and
aircraft designs.[53]

Studies on the reflection and scattering of light by the scales on wings of swallowtail butterflies led to the
innovation of more efficient light-emitting diodes.[54]

The structural coloration of butterflies is inspiring nanotechnology research to produce paints that do not
use toxic pigments and in the development of new display technologies.

The discoloration and health of butterflies in butterfly farms are now being studied for use as indicators of air quality in several cities.

ARTHROPOD

Arthropod
Fossil range: Cambrian – Recent, 540–0 Ma

Extinct and modern arthropods

Scientific classification

Domain: Eukaryota

Kingdom: Animalia

Subkingdom: Eumetazoa

Superphylum: Ecdysozoa

Phylum: Arthropoda Latreille, 1829

Subphyla and Classes

 Subphylum Trilobitomorpha
o Trilobita – trilobites (extinct)
 Subphylum Chelicerata
o Arachnida – spiders, scorpions, etc.
o Xiphosura – horseshoe crabs, etc.
o Pycnogonida – sea spiders
o Eurypterida – sea scorpions (extinct)
 Subphylum Myriapoda
o Chilopoda – centipedes
o Diplopoda – millipedes
o Pauropoda
o Symphyla
 Subphylum Hexapoda
o Insecta – insects
o Entognatha
 Subphylum Crustacea
o Branchiopoda – brine shrimp etc.
o Remipedia
o Cephalocarida – horseshoe shrimp
o Maxillopoda – barnacles, fish lice, etc.
o Ostracoda – seed shrimp
o Malacostraca – lobsters, crabs, shrimp, etc.

An arthropod is an invertebrate animal having an exoskeleton (external skeleton), a segmented body, and jointed appendages. Arthropods are members of the phylum Arthropoda (from Greek
ἄρθρον arthron, "joint", and ποδός podos "foot", which together mean "jointed feet"), and
include the insects, arachnids, crustaceans, and others. Arthropods are characterized by their
jointed limbs and cuticles, which are mainly made of α-chitin; the cuticles of crustaceans are also
biomineralized with calcium carbonate. The rigid cuticle inhibits growth, so arthropods replace it
periodically by molting. The arthropod body plan consists of repeated segments, each with a pair
of appendages. This body plan is so versatile that arthropods have been compared to Swiss Army knives, and it has
enabled them to become the most species-rich members of all ecological guilds in most
environments. They have over a million described species, making up more than 80% of all
described living animal species, and are one of only two animal groups that are very successful
in dry environments – the other being the amniotes. They range in size from microscopic
plankton up to forms a few meters long.

Arthropods' primary internal cavity is a hemocoel, which accommodates their internal organs
and through which their blood circulates; they have open circulatory systems. Like their
exteriors, the internal organs of arthropods are generally built of repeated segments. Their
nervous system is "ladder-like", with paired ventral nerve cords running through all segments
and forming paired ganglia in each segment. Their heads are formed by fusion of varying
numbers of segments, and their brains are formed by fusion of the ganglia of these segments and
encircle the esophagus. The respiratory and excretory systems of arthropods vary, depending as
much on their environment as on the subphylum to which they belong.

Their vision relies on various combinations of compound eyes and pigment-pit ocelli: in most
species the ocelli can only detect the direction from which light is coming, and the compound
eyes are the main source of information, but the main eyes of spiders are ocelli that can form
images and, in a few cases, can swivel to track prey. Arthropods also have a wide range of
chemical and mechanical sensors, mostly based on modifications of the many setae (bristles) that
project through their cuticles.

Arthropods' methods of reproduction and development are diverse; all terrestrial species use
internal fertilization, but this is often by indirect transfer of the sperm via an appendage or the
ground, rather than by direct injection. Aquatic species use either internal or external
fertilization. Almost all arthropods lay eggs, but scorpions give birth to live young after the eggs
have hatched inside the mother. Arthropod hatchlings vary from miniature adults to grubs and
caterpillars that lack jointed limbs and eventually undergo a total metamorphosis to produce the
adult form. The level of maternal care for hatchlings varies from zero to the prolonged care
provided by scorpions.

The versatility of the arthropod modular body plan has made it difficult for zoologists and
paleontologists to classify them and work out their evolutionary ancestry, which dates back to
the Cambrian period. From the late 1950s to late 1970s, it was thought that arthropods were
polyphyletic, that is, there was no single arthropod ancestor. Now they are generally regarded as
monophyletic. Historically, the closest evolutionary relatives of arthropods were considered to be
annelid worms, as both groups have segmented bodies. This hypothesis has since been largely
rejected, with annelids and molluscs instead forming the superphylum Lophotrochozoa. Many analyses
support a placement of arthropods with cycloneuralians (or their constituent clades) in a
superphylum Ecdysozoa. Overall however, the basal relationships of Metazoa are not yet well
resolved. Likewise, the relationships between various arthropod groups are still actively debated.

Although arthropods contribute to human food supply both directly as food and more
importantly as pollinators of crops, they also spread some of the most severe diseases and do
considerable damage to livestock and crops.


[edit] Description
Arthropods are invertebrates with segmented bodies and jointed limbs.[1] The limbs form part of
an exoskeleton, which is mainly made of α-chitin, a derivative of glucose.[2] One other group of
animals, the tetrapods, has jointed limbs, but tetrapods are vertebrates and therefore have
endoskeletons.[3]

[edit] Diversity

One estimate indicates that arthropods have 1,170,000 described species, and account for over
80% of all known living animal species.[4] Another study estimates that there are between 5 and
10 million extant arthropod species, both described and yet to be described.[5] Estimating the
total number of living species is extremely difficult because it often depends on a series of
assumptions in order to scale up from counts at specific locations to estimates for the whole
world. A study in 1992 estimated that there were 500,000 species of animals and plants in Costa
Rica alone, of which 365,000 were arthropods.[6]
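
Such scale-up estimates are chains of multiplications, each link resting on an assumption. The Python sketch below reproduces the style of Erwin's well-known 1982 canopy calculation; the numbers are his published assumptions, quoted only to illustrate the method, and are not figures from this article:

    # Each value is an assumption; changing any one rescales the final answer proportionally.
    host_specific_beetles_per_tree = 163    # beetle species assumed specific to one tropical tree species
    tropical_tree_species = 50_000          # assumed number of tropical tree species
    canopy_beetles = host_specific_beetles_per_tree * tropical_tree_species   # 8.15 million
    canopy_arthropods = canopy_beetles / 0.40   # assume beetles are 40% of canopy arthropod species
    total_arthropods = canopy_arthropods * 1.5  # assume the canopy holds 2/3 of all arthropod species
    print(f"{total_arthropods / 1e6:.1f} million species")  # about 30.6 million

The sensitivity of the result to every factor is precisely why published totals range so widely, from 5 to 10 million and beyond.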

They are important members of marine, freshwater, land and air ecosystems, and are one of only
two major animal groups that have adapted to life in dry environments; the other is amniotes,
whose living members are reptiles, birds and mammals.[7] One arthropod sub-group, insects, is
the most species-rich member of all ecological guilds (ways of making a living) in land and
fresh-water environments.[6] The lightest insects weigh less than 25 micrograms (millionths of a
gram),[8] while the heaviest weigh over 70 grams (2.5 oz).[9] Some living crustaceans are much
larger, for example the legs of the Japanese spider crab may span up to 4 metres (13 ft).[8]

[edit] Segmentation

Segments and tagmata of an arthropod[7]

Structure of a biramous appendage[10]

The embryos of all arthropods are segmented, built from a series of repeated modules. The last
common ancestor of living arthropods probably consisted of a series of undifferentiated
segments, each with a pair of appendages that functioned as limbs. However, all known living
and fossil arthropods have grouped segments into tagmata, in which segments and their limbs are
specialized in various ways.[7] The three-part appearance of many insect bodies and the two-part
appearance of spiders is a result of this grouping;[10] in fact there are no external signs of
segmentation in mites.[7] Arthropods also have two body elements that are not part of this serially
repeated pattern of segments, an acron at the front, ahead of the mouth, and a telson at the rear,
behind the anus. The eyes are mounted on the acron.[7]

The original structure of arthropod appendages was probably biramous, with the upper branch
acting as a gill while the lower branch was used for walking. In some segments of all known
arthropods the appendages have been modified, for example to form gills, mouth-parts, antennae
for collecting information,[10] or claws for grasping;[11] arthropods are "like Swiss Army knives,
each equipped with a unique set of specialized tools."[7] In many arthropods, appendages have
vanished from some regions of the body, and it is particularly common for abdominal
appendages to have disappeared or be highly modified.[7]

The Arthropod head problem[12]

The most conspicuous specialization of segments is in the head. The four major groups of
arthropods – Chelicerata (includes spiders and scorpions), Crustacea (shrimps, lobsters, crabs,
etc.), Tracheata (arthropods that breathe via channels into their bodies; includes insects and
myriapods), and the extinct trilobites – have heads formed of various combinations of segments,
with appendages that are missing or specialized in different ways.[7] In addition some extinct
arthropods, such as Marrella, belong to none of these groups, as their heads are formed by their
own particular combinations of segments and specialized appendages.[13] Working out the
evolutionary stages by which all these different combinations could have appeared is so difficult
that it has long been known as "the Arthropod head problem".[14] In 1960 R.E. Snodgrass even
hoped it would not be solved, as trying to work out solutions was so much fun.[15]

[edit] Exoskeleton

Main article: Arthropod exoskeleton

Structure of arthropod cuticle[16]

Arthropod exoskeletons are made of cuticle, a non-cellular material secreted by the epidermis.[7]
Their cuticles vary in the details of their structure, but generally consist of three main layers: the
epicuticle, a thin outer waxy coat that moisture-proofs the other layers and gives them some
protection; the exocuticle, which consists of chitin and chemically hardened proteins; and the
endocuticle, which consists of chitin and unhardened proteins. The exocuticle and endocuticle
together are known as the procuticle.[17] Each body segment and limb section is encased in
hardened cuticle. The joints between body segments and between limb sections are covered by
flexible cuticle.[7]

The exoskeletons of most aquatic crustaceans are biomineralized with calcium carbonate
extracted from the water. Some terrestrial crustaceans have developed means of storing the
mineral, since on land they cannot rely on a steady supply of dissolved calcium carbonate.[18]
Biomineralization generally affects the exocuticle and the outer part of the endocuticle.[17] Two
recent hypotheses about the evolution of biomineralization in arthropods and other groups of
animals propose that it provides tougher defensive armor,[19] and that it allows animals to grow
larger and stronger by providing more rigid skeletons;[20] and in either case a mineral-organic
composite exoskeleton is cheaper to build than an all-organic one of comparable strength.[20][21]

The cuticle can have setae (bristles) growing from special cells in the epidermis. Setae are as
varied in form and function as appendages. For example, they are often used as sensors to detect
air or water currents, or contact with objects; aquatic arthropods use feather-like setae to increase
the surface area of swimming appendages and to filter food particles out of water; aquatic
insects, which are air-breathers, use thick felt-like coats of setae to trap air, extending the time
they can spend under water; heavy, rigid setae serve as defensive spines.[7]

Although all arthropods use muscles attached to the inside of the exoskeleton to flex their limbs,
some still use hydraulic pressure to extend them, a system inherited from their pre-arthropod
ancestors;[22] for example, all spiders extend their legs hydraulically and can generate pressures
up to eight times their resting level.[23]

[edit] Molting

The exoskeleton cannot stretch and thus restricts growth. Arthropods therefore replace their
exoskeletons by molting, or shedding the old exoskeleton after growing a new one that is not yet
hardened. Molting cycles run nearly continuously until an arthropod reaches full size.[24]

In the initial phase of molting, the animal stops feeding and its epidermis releases molting fluid,
a mixture of enzymes that digests the endocuticle and thus detaches the old cuticle. This phase
begins when the epidermis has secreted a new epicuticle to protect it from the enzymes, and the
epidermis secretes the new exocuticle while the old cuticle is detaching. When this stage is
complete, the animal makes its body swell by taking in a large quantity of water or air, and this
makes the old cuticle split along predefined weaknesses where the old exocuticle was thinnest. It
commonly takes several minutes for the animal to struggle out of the old cuticle. At this point the
new one is wrinkled and so soft that the animal cannot support itself and finds it very difficult to
move, and the new endocuticle has not yet formed. The animal continues to pump itself up to
stretch the new cuticle as much as possible, then hardens the new exocuticle and eliminates the
excess air or water. By the end of this phase the new endocuticle has formed. Many arthropods
then eat the discarded cuticle to reclaim its materials.[24]
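
The molting sequence described above is strictly ordered: each phase must finish before the next begins. As a compact summary (Python; the phase wording is condensed from the paragraph above, and the list structure is mine):

    # The molting cycle, condensed to ordered phases.
    MOLT_PHASES = [
        "stop feeding; epidermis secretes a new protective epicuticle",
        "molting fluid digests the old endocuticle; new exocuticle is secreted",
        "swell with water or air until the old cuticle splits along its thinnest lines",
        "struggle out of the old cuticle (usually a matter of minutes)",
        "keep pumping up to stretch the new, still-soft cuticle",
        "harden the new exocuticle and expel the excess air or water",
        "new endocuticle forms; the discarded cuticle is often eaten",
    ]
    for step, phase in enumerate(MOLT_PHASES, 1):
        print(f"{step}. {phase}")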

Because arthropods are unprotected and nearly immobilized until the new cuticle has hardened,
they are in danger both of being trapped in the old cuticle and of being attacked by predators.
Molting may be responsible for 80 to 90% of all arthropod deaths.[24]

[edit] Internal organs

Basic arthropod body structure[25]

Arthropod bodies are also segmented internally, and the nervous, muscular, circulatory and
excretory systems have repeated components.[7] Arthropods come from a lineage of animals that
have a coelom, a membrane-lined cavity between the gut and the body wall that accommodates
the internal organs. The strong, segmented limbs of arthropods eliminate the need for one of the
coelom's main ancestral functions, as a hydrostatic skeleton, which muscles compress in order to
change the animal's shape and thus enable it to move. Hence the coelom of the arthropod is
reduced to small areas around the reproductive and excretory systems. Its place is largely taken
by a hemocoel, a cavity that runs most of the length of the body and through which blood
flows.[26]

Arthropods have open circulatory systems, although most have a few short, open-ended arteries.
In chelicerates and crustaceans, the blood carries oxygen to the tissues, while hexapods use a
separate system of tracheae. Many crustaceans, but few chelicerates and tracheates, use
respiratory pigments to assist oxygen transport. The most common respiratory pigment in
arthropods is copper-based hemocyanin; this is used by many crustaceans and a few centipedes.
A few crustaceans and insects use iron-based hemoglobin, the respiratory pigment used by
vertebrates. As with other invertebrates and unlike among vertebrates, the respiratory pigments
of those arthropods that have them are generally dissolved in the blood and rarely enclosed in
corpuscles.[26]

The heart is typically a muscular tube that runs just under the back and for most of the length of
the hemocoel. It contracts in ripples that run from rear to front, pushing blood forwards. Elastic
ligaments, or small muscles, connect the heart to the body wall and expand sections that are not
being squeezed by the heart muscle. Along the heart run a series of paired ostia, non-return
valves that allow blood to enter the heart but prevent it from leaving before it reaches the
front.[26]

Arthropods have a wide variety of respiratory systems. Small species often do not have any,
since their high ratio of surface area to volume enables simple diffusion through the body surface
to supply enough oxygen. Crustacea usually have gills that are modified appendages. Many
arachnids have book lungs. Tracheae, systems of branching tunnels that run from the openings in
the body walls, deliver oxygen directly to individual cells in many insects, myriapods and
arachnids.[27]

Living arthropods have paired main nerve cords running along their bodies below the gut, and in
each segment the cords form a pair of ganglia from which sensory and motor nerves run to other
parts of the segment. Although the pairs of ganglia in each segment often appear physically
fused, they are connected by commissures (relatively large bundles of nerves), which give
arthropod nervous systems a characteristic "ladder-like" appearance. The brain is in the head,
encircling and mainly above the esophagus. It consists of the fused ganglia of the acron and one
or two of the foremost segments that form the head – a total of three pairs of ganglia in most
arthropods, but only two in chelicerates, which do not have antennae or the ganglion connected
to them. The ganglia of other head segments are often close to the brain and function as part of it.
In insects these other head ganglia combine into a pair of subesophageal ganglia, under and
behind the esophagus. Spiders take this process a step further, as all the segmental ganglia are
incorporated into the subesophageal ganglia, which occupy most of the space in the
cephalothorax (front "super-segment").[28]

There are two different types of arthropod excretory systems. In aquatic arthropods, the end-
product of biochemical reactions that metabolise nitrogen is ammonia, which is so toxic that it
needs to be diluted as much as possible with water. The ammonia is then eliminated via any
permeable membrane, mainly through the gills. All crustaceans use this system, and its high
consumption of water may be responsible for the relative lack of success of crustaceans as land
animals.[29] Various groups of terrestrial arthropods have independently developed a different
system: the end-product of nitrogen metabolism is uric acid, which can be excreted as dry
material; Malpighian tubules filter the uric acid and other nitrogenous waste out of the blood in
the hemocoel, and dump these materials into the hindgut, from which they are expelled as
feces.[29] Most aquatic arthropods and some terrestrial ones also have organs called nephridia
("little kidneys"), which extract other wastes for excretion as urine.[29]

[edit] Senses

Main article: Arthropod eye

The stiff cuticles of arthropods would block out information about the outside world, except that
they are penetrated by many sensors or connections from sensors to the nervous system. In fact,
arthropods have modified their cuticles into elaborate arrays of sensors. Various touch sensors,
mostly setae, respond to different levels of force, from strong contact to very weak air currents.
Chemical sensors provide equivalents of taste and smell, often by means of setae. Pressure
sensors often take the form of membranes that function as eardrums, but are connected directly
to nerves rather than to auditory ossicles. The antennae of most hexapods include sensor
packages that monitor humidity, moisture and temperature.[30]

Head of a wasp with three ocelli (centre), and compound eyes at the left and right

Most arthropods have sophisticated visual systems that include compound eyes, pigment-cup
ocelli ("little eyes"), or, usually, both. In most cases ocelli are only capable of
detecting the direction from which light is coming, using the shadow cast by the walls of the cup.
However the main eyes of spiders are pigment-cup ocelli that are capable of forming images,[30]
and those of jumping spiders can rotate to track prey.[31]

Compound eyes consist of fifteen to several thousand independent ommatidia, columns that are
usually hexagonal in cross section. Each ommatidium is an independent sensor, with its own
light-sensitive cells and often with its own lens and cornea.[30] Compound eyes have a wide field
of view, and can detect fast movement and, in some cases, the polarization of light.[32] On the
other hand, the relatively large size of ommatidia makes the images rather coarse, and compound
eyes are shorter-sighted than those of birds and mammals – although this is not a severe
disadvantage, as objects and events within 20 centimetres (7.9 in) are most important to most
arthropods.[30] Several arthropods have color vision, and that of some insects has been studied in
detail; for example, the ommatidia of bees contain receptors for both green and ultra-violet.[30]
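
The coarseness of compound-eye images follows from simple geometry. A standard first-order approximation from the optics of apposition compound eyes (not from this article, and the example numbers below are hypothetical) gives the angular spacing between neighbouring ommatidia as

    Δφ ≈ d / R   (in radians),

where d is the diameter of a single facet and R the local radius of curvature of the eye. For an assumed facet diameter d = 20 µm on an eye of radius R = 1 mm, Δφ ≈ 0.02 rad, about 1.1°, roughly seventy times coarser than the approximately one arc-minute resolution of the human fovea. This is why a compound eye trades sharpness for its very wide field of view, and why detail beyond a few tens of centimetres matters little to it.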

Most arthropods lack balance and acceleration sensors, and rely on their eyes to tell them which
way is up. The self-righting behavior of cockroaches is triggered when pressure sensors on the
underside of the feet report no pressure. However many malacostracan crustaceans have
statocysts, which provide the same sort of information as the balance and motion sensors of the
vertebrate inner ear.[30]

The proprioceptors of arthropods, sensors that report the force exerted by muscles and the degree
of bending in the body and joints, are well understood. However, little is known about what other
internal sensors arthropods may have.[30]

[edit] Reproduction and development

Compsobuthus werneri female with young (white)

A few arthropods, such as barnacles, are hermaphroditic, that is, each can have the organs of
both sexes. However, individuals of most species remain of one sex all their lives.[33] A few
species of insects and crustaceans can reproduce by parthenogenesis, that is, without
mating, especially if conditions favor a "population explosion". However, most arthropods rely
on sexual reproduction, and parthenogenetic species often revert to sexual reproduction when
conditions become less favorable.[34] Aquatic arthropods may breed by external fertilization, as
for example frogs also do, or by internal fertilization, where the ova remain in the female's body
and the sperm must somehow be inserted. All known terrestrial arthropods use internal
fertilization, as unprotected sperm and ova would not survive long in these environments. In a
few cases the sperm transfer is direct from the male's penis to the female's oviduct, but it is more
often indirect. Some crustaceans and spiders use modified appendages to transfer the sperm to
the female. On the other hand, many male terrestrial arthropods produce spermatophores,
waterproof packets of sperm, which the females take into their bodies. A few such species rely
on females to find spermatophores that have already been deposited on the ground, but in most
cases males only deposit spermatophores when complex courtship rituals look likely to be
successful.[33]

The nauplius larva of a prawn

Most arthropods lay eggs,[33] but scorpions are viviparous: they produce live young after the eggs
have hatched inside the mother, and are noted for prolonged maternal care.[35] Newly born
arthropods have diverse forms, and insects alone cover the range of extremes. Some hatch as
apparently miniature adults (direct development), and in some cases, such as silverfish, the
hatchlings do not feed and may be helpless until after their first molt. Many insects hatch as
grubs or caterpillars, which do not have segmented limbs or hardened cuticles, and
metamorphose into adult forms by entering an inactive phase in which the larval tissues are
broken down and re-used to build the adult body.[36] Dragonfly larvae have the typical cuticles
and jointed limbs of arthropods but are flightless water-breathers with extendable jaws.[37]
Crustaceans commonly hatch as tiny nauplius larvae that have only three segments and pairs of
appendages.[33]

[edit] Evolution
[edit] Last common ancestor

The last common ancestor of all arthropods is reconstructed as a modular organism with each
module covered by its own sclerite (armor plate) and bearing a pair of biramous limbs.[38]
Whether the ancestral limb was uniramous or biramous, however, is far from settled. This
Ur-arthropod had a ventral mouth, pre-oral antennae and dorsal eyes at the front of the body. It
was a non-discriminatory sediment feeder, processing whatever sediment came its way for
food.[38]

[edit] Fossil record

Marrella, one of the puzzling arthropods from the Burgess Shale

It has been proposed that the Ediacaran animals Parvancorina and Spriggina, from around 555
Mya, were arthropods.[39][40][41] Small arthropods with bivalve-like shells have been found in
Early Cambrian fossil beds dating 541 to 539 million years ago in China.[42][43] The earliest
Cambrian trilobite fossils are about 530 million years old, but the class was already quite diverse
and worldwide, suggesting that they had been around for quite some time.[44] Re-examination in
the 1970s of the Burgess Shale fossils from about 505 million years ago identified many
arthropods, some of which could not be assigned to any of the well-known groups, and thus
intensified the debate about the Cambrian explosion.[45][46][47] A fossil of Marrella from the
Burgess Shale has provided the earliest clear evidence of molting.[48]

The earliest fossil crustaceans date from about 513 million years ago in the Cambrian,[49] and
fossil shrimp from about 500 million years ago apparently formed a tight-knit procession across
the seabed.[50] Crustacean fossils are common from the Ordovician period onwards.[51] They have
remained almost entirely aquatic, possibly because they never developed excretory systems that
conserve water.[29]

Arthropods provide the earliest identifiable fossils of land animals, from about 419 million years
ago in the Late Silurian, and terrestrial tracks from about 450 million years ago appear to have
been made by arthropods.[52] Arthropods were well pre-adapted to colonize land, because their
existing jointed exoskeletons provided protection against desiccation, support against gravity and
a means of locomotion that was not dependent on water.[53] Around the same time the aquatic,
scorpion-like eurypterids became the largest ever arthropods, some as long as 2.5 metres (8.2
ft).[54]

The oldest known arachnid is the trigonotarbid Palaeotarbus jerami, from about 420 million
years ago in the Silurian period.[55] Attercopus fimbriunguis, from 386 million years ago in the
Devonian period, bears the earliest known silk-producing spigots, but its lack of spinnerets
means it was not one of the true spiders,[56] which first appear in the Late Carboniferous over 299
million years ago.[57] The Jurassic and Cretaceous periods provide a large number of fossil
spiders, including representatives of many modern families.[58] Fossils of aquatic scorpions with
gills appear in the Silurian and Devonian periods, and the earliest fossil of an air-breathing
scorpion with book lungs dates from the Early Carboniferous period.[59]

The oldest definitive insect fossil is the Devonian Rhyniognatha hirsti, dated at 396 to 407
million years ago, but its mandibles are of a type found only in winged insects, which suggests
that the earliest insects appeared in the Silurian period.[60] The Mazon Creek lagerstätten from
the Late Carboniferous, about 300 million years ago, include about 200 species, some gigantic
by modern standards, and indicate that insects had occupied their main modern ecological niches
as herbivores, detritivores and insectivores. Social termites and ants first appear in the Early
Cretaceous, and advanced social bees have been found in Late Cretaceous rocks but did not
become abundant until the Mid Cenozoic.[61]

[edit] Evolutionary family tree

The velvet worm (Onychophora) is closely related to arthropods[62]

From the late 1950s to the late 1970s, Sidnie Manton and others argued that arthropods are
polyphyletic, in other words, they do not share a common ancestor that was itself an arthropod.
Instead, they proposed that three separate groups of "arthropods" evolved separately from
common worm-like ancestors: the chelicerates, including spiders and scorpions; the crustaceans;
and the uniramia, consisting of onychophorans, myriapods and hexapods. These arguments
usually bypassed trilobites, as the evolutionary relationships of this class were unclear.
Proponents of polyphyly argued the following: that the similarities between these groups are the
results of convergent evolution, as natural consequences of having rigid, segmented
exoskeletons; that the three groups use different chemical means of hardening the cuticle; that
there were significant differences in the construction of their compound eyes; that it is hard to
see how such different configurations of segments and appendages in the head could have
evolved from the same ancestor; and that crustaceans have biramous limbs with separate gill and
leg branches, while the other two groups have uniramous limbs in which the single branch serves
as a leg.[63]

onychophorans, including Aysheaia and Peripatus
armored lobopods, including Hallucigenia and Microdictyon
anomalocarid-like taxa, including modern tardigrades as well as extinct animals like Kerygmachela and Opabinia
Anomalocaris
arthropods, including living groups and extinct forms such as trilobites

Simplified summary of Budd's "broad-scale" cladogram (1996), listed from the most distant relatives of arthropods inward[62]

Further analysis and discoveries in the 1990s reversed this view, and led to acceptance that
arthropods are monophyletic, in other words they do share a common ancestor that was itself an
arthropod.[64][65] For example Graham Budd's analyses of Kerygmachela in 1993 and of
Opabinia in 1996 convinced him that these animals were similar to onychophorans and to
various Early Cambrian "lobopods", and he presented an "evolutionary family tree" that showed
these as "aunts" and "cousins" of all arthropods.[62][66] These changes made the scope of the term
"arthropod" unclear, and Claus Nielsen proposed that the wider group should be labelled
"Panarthropoda" ("all the arthropods") while the animals with jointed limbs and hardened
cuticles should be called "Euarthropoda" ("true arthropods").[67]

A contrary view was presented in 2003, when Jan Bergström and Xian-Guang Hou argued that,
if arthropods were a "sister-group" to any of the anomalocarids, they must have lost and then re-
evolved features that were well-developed in the anomalocarids. The earliest known arthropods
ate mud in order to extract food particles from it, and possessed variable numbers of segments
with unspecialized appendages that functioned as both gills and legs. Anomalocarids were, by
the standards of the time, huge and sophisticated predators with specialized mouths and grasping
appendages, fixed numbers of segments some of which were specialized, tail fins, and gills that
were very different from those of arthropods. This reasoning implies that Parapeytoia, which has
legs and a backward-pointing mouth like that of the earliest arthropods, is a more credible closest
relative of arthropods than is Anomalocaris.[68] In 2006, they suggested that arthropods were
more closely related to lobopods and tardigrades than to anomalocarids.[69]

Protostomes
  Lophotrochozoa (annelids, molluscs, brachiopods, etc.)
  Ecdysozoa
    Nematoida (nematodes and close relatives)
    Loricifera
    Scalidophora (priapulids and Kinorhyncha)
    Panarthropoda
      Onychophorans
      Tardigrades
      Euarthropoda
        Chelicerates
        Mandibulata
          Euthycarcinoids
          Myriapods
          Crustaceans
          Hexapods

Relationships of Ecdysozoa to each other and to annelids, etc.,[70] including Euthycarcinoids[71]

Higher up the "family tree", the Annelida have traditionally been considered the closest relatives
of the Panarthropoda, since both groups have segmented bodies, and the combination of these
groups was labelled Articulata. There had been competing proposals that arthropods were closely
related to other groups such as nematodes, priapulids and tardigrades, but these remained
minority views because it was difficult to specify in detail the relationships between these
groups.

In the 1990s, molecular phylogenetic analyses of DNA sequences produced a coherent scheme
showing arthropods as members of a superphylum labelled Ecdysozoa ("animals that molt"),
which contained nematodes, priapulids and tardigrades but excluded annelids. This was backed
up by studies of the anatomy and development of these animals, which showed that many of the
features that supported the Articulata hypothesis showed significant differences between annelids
and the earliest Panarthropods in their details, and some were hardly present at all in arthropods.
This hypothesis groups annelids with molluscs and brachiopods in another superphylum,
Lophotrochozoa.

If the Ecdysozoa hypothesis is correct, then segmentation of arthropods and annelids either has
evolved convergently or has been inherited from a much older ancestor and subsequently lost in
several other lineages, such as the non-arthropod members of the Ecdysozoa.[72][70]

[edit] Classification of arthropods


Euarthropoda
  Chelicerata
  Myriapoda
  Pancrustacea
    Cirripedia
    Remipedia
    Collembola
    Branchiopoda
    Cephalocarida
    Malacostraca
    Insecta

Phylogenetic relationships of the major extant arthropod groups, derived from mitochondrial DNA sequences.[73]
Cirripedia, Remipedia, Branchiopoda, Cephalocarida and Malacostraca are parts of the subphylum Crustacea.

Euarthropods are typically classified into five subphyla, of which one is extinct:[74]

1. Trilobites are a group of formerly numerous marine animals that disappeared in the Permian-
Triassic extinction event, though they were in decline prior to this killing blow, having been
reduced to one order in the Late Devonian extinction.
2. Chelicerates include spiders, mites, scorpions and related organisms. They are characterised by
the presence of chelicerae, appendages just above / in front of the mouth. Chelicerae appear in
scorpions as tiny claws that they use in feeding, but those of spiders have developed as fangs
that inject venom.
3. Myriapods comprise millipedes, centipedes, and their relatives and have many body segments,
each bearing one or two pairs of legs. They are sometimes grouped with the hexapods.
4. Hexapods comprise insects and three small orders of insect-like animals with six thoracic legs.
They are sometimes grouped with the myriapods, in a group called Uniramia, though genetic
evidence tends to support a closer relationship between hexapods and crustaceans.

5. Crustaceans are primarily aquatic (a notable exception being woodlice) and are characterised by
having biramous appendages. They include lobsters, crabs, barnacles, crayfish, shrimp and many
others.

Aside from these major groups, there are also a number of fossil forms, mostly from the Early
Cambrian, which are difficult to place, either from lack of obvious affinity to any of the main
groups or from clear affinity to several of them. Marrella was the first one to be recognized as
significantly different from the well-known groups.[13]

The phylogeny of the major extant arthropod groups has been an area of considerable interest
and dispute.[75] The most recent studies tend to suggest a paraphyletic Crustacea with different
hexapod groups nested within it. Myriapoda is grouped with Chelicerata in some recent studies
(forming Myriochelata),[73][76] and with Pancrustacea in other studies (forming Mandibulata).[77]
The placement of the extinct trilobites is also a frequent subject of dispute.[78]

Since the International Code of Zoological Nomenclature recognises no priority above the rank
of family, many of the higher-level groups can be referred to by a variety of different names.[79]

[edit] Interaction with humans

Insects and scorpions on sale in a food stall in Bangkok

Crustaceans such as crabs, lobsters, crayfish, shrimps and prawns have long been part of human
cuisine, and are now farmed on a large commercial scale.[80] Insects and their grubs are at least as
nutritious as meat, and are eaten both raw and cooked in many non-European cultures.[81]
Cooked tarantulas are considered a delicacy in Cambodia,[82] and by the Piaroa Indians of
southern Venezuela, after the highly irritant hairs – the spider's main defense system – are
removed.[83] Humans also unintentionally eat arthropods in other foods,[84] and food safety
regulations lay down acceptable contamination levels for different kinds of food material.[85][86]
The intentional cultivation of arthropods and other small animals for human food, referred to as
minilivestock, is now emerging in animal husbandry as an ecologically sound concept.[87]

However, the greatest contribution of arthropods to human food supply is by pollination: a 2008
study examined the 100 crops that FAO lists as grown for food, and estimated pollination's
economic value as €153 billion, or 9.5% of the value of world agricultural production used for
human food in 2005.[88] Besides pollinating, bees produce honey, which is the basis of a rapidly
growing industry and international trade.[89]
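
As a rough consistency check (the arithmetic here is mine, not the study's): if €153 billion is 9.5% of the value of world agricultural production used for human food, then that total value was

    €153 billion / 0.095 ≈ €1,610 billion,

i.e. on the order of €1.6 trillion in 2005.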

The red dye cochineal, produced from a Central American species of insect, was economically
important to the Aztecs and Mayans,[90] and, while the region was under Spanish control, it
became Mexico's second most-lucrative export;[91] it is now regaining some of the ground
it lost to synthetic competitors.[92] The blood of horseshoe crabs contains a clotting agent
Limulus Amebocyte Lysate which is now used to test that antibiotics and kidney machines are
free of dangerous bacteria, and to detect spinal meningitis and some cancers.[93] Forensic
entomology uses evidence provided by arthropods to establish the time and sometimes the place
of death of a human, and in some cases the cause.[94]

The relative simplicity of the arthropods' body plan, allowing them to move on a variety of
surfaces both on land and in water, has made them useful as models for robotics. The
redundancy provided by segments allows arthropods and biomimetic robots to move normally
even with damaged or lost appendages.[95][96]

Diseases transmitted by insects

Disease[97] Insect Cases per year Deaths per year

Malaria Anopheles mosquito 267 M 1 to 2 M

Yellow fever Aedes mosquito 4,432 1,177

Filariasis Culex mosquito 250 M unknown
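
Where the table gives both cases and deaths, a rough case-fatality ratio can be read off directly. A minimal sketch (Python; taking the midpoint of "1 to 2 M" for malaria deaths):

    # Cases and deaths per year, taken from the table above.
    diseases = {
        "Malaria":      (267e6, 1.5e6),  # deaths: midpoint of "1 to 2 M"
        "Yellow fever": (4_432, 1_177),
    }
    for name, (cases, deaths) in diseases.items():
        print(f"{name}: {deaths / cases:.1%} of reported cases fatal")
    # Malaria: 0.6% of reported cases fatal; Yellow fever: 26.6% of reported cases fatal

The contrast is instructive: malaria kills vastly more people through sheer case volume, while yellow fever is far more lethal per infection.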

Although arthropods are the most numerous phylum on Earth, and thousands of arthropod
species are venomous, they inflict relatively few serious bites and stings on humans. Far more
serious are the effects on humans of diseases carried by blood-sucking insects. Other blood-
sucking insects infect livestock with diseases that kill many animals and greatly reduce the
usefulness of others.[97] Ticks can cause tick paralysis and several parasite-borne diseases in
humans.[98] A few of the closely related mites also infest humans, causing intense itching,[99] and
others cause allergic diseases, including hay fever, asthma and eczema.[100]

Many species of arthropods, principally insects but also mites, are agricultural and forest
pests.[101][102] The mite Varroa destructor has become the largest single problem faced by
beekeepers worldwide.[103] Efforts to control arthropod pests by large-scale use of pesticides
have caused long term effects on human health and on biodiversity.[104] Increasing arthropod
resistance to pesticides has led to the development of integrated pest management using a wide
range of measures including biological control.[101] Predatory mites may be useful in controlling
some mite pests.[105][106]

CNIDARIA (COELENTERATA)

Scientific classification

Domain: Eukaryota

Kingdom: Animalia

Cnidaria
Phylum:
Hatschek, 1888

Cnidaria (pronounced /naɪˈdɛəriə/ with a silent c) is a phylum containing over 9,000 species of animals
found exclusively in aquatic and mostly marine environments. Their distinguishing feature is cnidocytes,
specialized cells that they use mainly for capturing prey. Their bodies consist of mesoglea, a non-living
jelly-like substance, sandwiched between two layers of epithelium that are mostly one cell thick. They
have two basic body forms: swimming medusae and sessile polyps, both of which are radially
symmetrical with mouths surrounded by tentacles that bear cnidocytes. Both forms have a single orifice
and body cavity that are used for digestion and respiration. Many cnidarian species produce colonies that
are single organisms composed of medusa-like or polyp-like zooids, or both. Cnidarians' activities are
coordinated by a decentralized nerve net and simple receptors. Several free-swimming Cubozoa and
Scyphozoa possess balance-sensing statocysts, and some have simple eyes. Not all cnidarians reproduce
sexually. Many have complex lifecycles with asexual polyp stages and sexual medusae, but some omit
either the polyp or the medusa stage.

Cnidarians were for a long time grouped with Ctenophores in the phylum Coelenterata, but increasing
awareness of their differences caused them to be placed in separate phyla. Cnidarians are classified into
four main groups: sessile Anthozoa (sea anemones, corals, and sea pens; sea anemones are not strictly
sessile, but move only 3–4 inches an hour); swimming Scyphozoa (jellyfish); Cubozoa (box jellies); and Hydrozoa, a
diverse group that includes all the freshwater cnidarians as well as many marine forms, and has both
sessile members such as Hydra and colonial swimmers such as the Portuguese Man o' War. Staurozoa
have recently been recognised as a class in their own right rather than a sub-group of Scyphozoa, and
there is debate about whether Myxozoa and Polypodiozoa are cnidarians or closer to bilaterians (more
complex animals).

Most cnidarians prey on organisms ranging in size from plankton to animals several times larger than
themselves, but many obtain much of their nutrition from endosymbiotic algae, and a few are parasites.
Many are preyed upon by other animals including starfish, sea slugs, fish and turtles. Coral reefs, whose
polyps are rich in endosymbiotic algae, support some of the world's most productive ecosystems, and
protect vegetation in tidal zones and on shorelines from strong currents and tides. While corals are almost
entirely restricted to warm, shallow marine waters, other cnidarians live in the depths, in polar seas and in
freshwater.

Fossil cnidarians have been found in rocks formed about 580 million years ago, and other fossils show
that corals may have been present shortly before 490 million years ago and diversified a few million years
later. Fossils of cnidarians that do not build mineralized structures are very rare. Scientists currently think
that cnidarians, ctenophores and bilaterians are more closely related to calcareous sponges than these are
to other sponges, and that anthozoans are the evolutionary "aunts" or "sisters" of other cnidarians, and the
most closely related to bilaterians. Recent analyses have concluded that cnidarians, although considered
more "primitive" than bilaterians, have a wider range of genes.

Jellyfish stings killed several hundred people in the 20th century, and cubozoans are particularly
dangerous. On the other hand, some large jellyfish are considered a delicacy in eastern and southern Asia.
Coral reefs have long been economically important as providers of fishing grounds, protectors of shore
buildings against currents and tides, and more recently as centers of tourism. However, they are
vulnerable to over-fishing, mining for construction materials, pollution, and damage caused by tourism.

[edit] Classification


Modern cnidarians are generally classified into four classes:[4]

                                      Hydrozoa               Scyphozoa                      Cubozoa       Anthozoa
Number of species                     2,700                  200                            20            6,000
Examples                              Hydra, siphonophores   Jellyfish                      Box jellies   Sea anemones, corals, sea pens
Cells found in mesoglea               No                     Yes                            Yes           Yes
Nematocysts in exodermis              No                     Yes                            Yes           Yes
Medusa phase in life cycle            In some species        Yes, except for Stauromedusae  Yes           No
                                                             if they are scyphozoans
Number of medusae produced per polyp  Many                   Many                           One           (not applicable)

Stauromedusae, small sessile cnidarians with stalks and no medusa stage, have traditionally been
classified as members of the Scyphozoa, but recent research suggests they should be regarded as a
separate class, Staurozoa.[5]

The Myxozoa, microscopic parasites, were first classified as protozoans,[6] but have recently been
reclassified as heavily modified cnidarians, more closely related to Hydrozoa and Scyphozoa than to Anthozoa.[7] However,
other recent research suggests that Polypodium hydriforme, a parasite within the egg cells of sturgeon, is
closely related to the Myxozoa and that both Polypodium and the Myxozoa are intermediate between
cnidarians and bilaterian animals.[8]

Some researchers classify the extinct conulariids as cnidarians, while others propose that they form a
completely separate phylum.[9]

[edit] Ecology

Coral reefs support rich ecosystems

Many cnidarians are limited to shallow waters because they depend on endosymbiotic algae for much of
their nutrients. The life cycles of most have polyp stages, which are limited to locations that offer stable
substrates. Nevertheless, major cnidarian groups contain species that have escaped these limitations.
Hydrozoans have a worldwide range: some, such as Hydra, live in freshwater; Obelia appears in the
coastal waters of all the oceans; and Liriope can form large shoals near the surface in mid-ocean. Among
anthozoans, a few scleractinian corals, sea pens and sea fans live in deep, cold waters, and some sea
anemones inhabit polar seabeds while others live near hydrothermal vents over 10 kilometres (6.2 mi)
below sea-level. Reef-building corals are limited to tropical seas between 30°N and 30°S with a
maximum depth of 46 metres (151 ft), temperatures between 20°C and 28°C, high salinity and low carbon
dioxide levels. Stauromedusae, although usually classified as jellyfish, are stalked, sessile animals that
live in cool to Arctic waters.[10] Cnidarians range in size from Hydra, 5–20 millimetres (0.20–0.79 in)
long,[11] to the Lion's mane jellyfish, which may exceed 2 metres (6.6 ft) in diameter and 75 metres (246
ft) in length.[12]
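
The stated limits for reef-building corals amount to a conjunction of simple conditions, which can be written as a predicate. A sketch (Python; the thresholds come from the text above, while the function name and the reduction of "high salinity" and "low carbon dioxide" to booleans are mine):

    def suitable_for_reef_corals(latitude_deg, depth_m, temp_c,
                                 high_salinity=True, low_co2=True):
        # Rough habitat check using the limits given in the text.
        return (abs(latitude_deg) <= 30   # tropical seas between 30°N and 30°S
                and depth_m <= 46         # maximum depth of 46 m (151 ft)
                and 20 <= temp_c <= 28    # temperature window in °C
                and high_salinity
                and low_co2)

    print(suitable_for_reef_corals(10, 20, 25))   # True
    print(suitable_for_reef_corals(45, 20, 25))   # False: outside 30°N-30°S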

Prey of cnidarians ranges from plankton to animals several times larger than themselves.[10][13] Some
cnidarians are parasites, mainly on jellyfish, but a few are major pests of fish.[10] Others obtain most of
their nourishment from endosymbiotic algae or dissolved nutrients.[4] Predators of cnidarians include: sea
slugs, which can incorporate nematocysts into their own bodies for self-defense;[14] starfish, notably the
crown of thorns starfish, which can devastate corals;[10] butterfly fish and parrot fish, which eat corals;[15]
and marine turtles, which eat jellyfish.[12] Some sea anemones and jellyfish have a symbiotic relationship
with some fish; for example clown fish live among the tentacles of sea anemones, and each partner
protects the other against predators.[10]

Coral reefs form some of the world's most productive ecosystems. Common coral reef cnidarians include
both Anthozoans (hard corals, octocorals, anemones) and Hydrozoans (fire corals, lace corals). The
endosymbiotic algae of many cnidarian species are very effective primary producers, in other words
converters of inorganic chemicals into organic ones that other organisms can use, and their coral hosts use
these organic chemicals very efficiently. In addition reefs provide complex and varied habitats that
support a wide range of other organisms.[16] "Fringing" reefs just below low-tide level also have a
mutually beneficial relationship with mangrove forests at high-tide level and sea grass meadows in
between: the reefs protect the mangroves and seagrass from strong currents and waves that would damage
them or erode the sediments in which they are rooted, while the mangroves and seagrass protect the coral
from large influxes of silt, fresh water and pollutants. This additional level of variety in the environment
is beneficial to many types of coral reef animals, which for example may feed in the sea grass and use the
reefs for protection or breeding.[17]


[edit] Distinguishing features

Further information: Sponge, Ctenophore, and Bilateria

Cnidarians form an animal phylum that is more complex than sponges, about as complex as ctenophores
(comb jellies), and less complex than bilaterians, which include almost all other animals. However, both
cnidarians and ctenophores are more complex than sponges as they have: cells bound by inter-cell
connections and carpet-like basement membranes; muscles; nervous systems; and some have sensory
organs. Cnidarians are distinguished from all other animals by having cnidocytes that fire like harpoons
and are used mainly to capture prey but also as anchors in some species.[4]

Like sponges and ctenophores, cnidarians have two main layers of cells that sandwich a middle layer of
jelly-like material, which is called the mesoglea in cnidarians; more complex animals have three main cell
layers and no intermediate jelly-like layer. Hence, cnidarians and ctenophores have traditionally been
labelled diploblastic, along with sponges.[4][18] However, both cnidarians and ctenophores have a type of
muscle that, in more complex animals, arises from the middle cell layer.[19] As a result, some recent
textbooks classify ctenophores as triploblastic,[20] and it has been suggested that cnidarians evolved from
triploblastic ancestors.[19]

                                      Sponges[21][22]           Cnidarians[4][18]          Ctenophores[4][20]        Bilateria[4]
Cnidocytes                            No                        Yes                        No                        No
Colloblasts                           No                        No                         Yes                       No
Digestive and circulatory organs      No                        Yes                        Yes                       Yes
Number of main cell layers            Two, with jelly-like      Two, with jelly-like       Two[4] or Three[19][20]   Three
                                      layer between them        layer between them
Cells in each layer bound together    No, except that           Yes: inter-cell            Yes                       Yes
                                      Homoscleromorpha have     connections; basement
                                      basement membranes[23]    membranes
Sensory organs                        No                        Yes                        Yes                       Yes
Number of cells in middle             Many                      Few                        Few                       (Not applicable)
"jelly" layer
Cells in outer layers can move        Yes                       No                         No                        (Not applicable)
inwards and change functions
Nervous system                        No                        Yes, simple                Yes, simple               Simple to complex
Muscles                               None                      Mostly epitheliomuscular   Mostly myoepithelial      Mostly myocytes

[edit] Description

[edit] Main cell layers

Cnidaria are diploblastic animals, in other words they have two main cell layers, while more complex
animals are triploblasts having three main layers. The two main cell layers of cnidarians form epithelia
that are mostly one cell thick, and are attached to a fibrous basement membrane, which they secrete. They
also secrete the jelly-like mesoglea that separates the layers. The layer that faces outwards, known as the
ectoderm ("outside skin"), generally contains the following types of cells:[4]

• Epitheliomuscular cells whose bodies form part of the epithelium but whose bases extend to form
  muscle fibers in parallel rows.[24] The fibers of the outward-facing cell layer generally run at right
  angles to the fibers of the inward-facing one. In Anthozoa (anemones, corals, etc.) and Scyphozoa
  (jellyfish), the mesoglea also contains some muscle cells.[18]
• Cnidocytes, the harpoon-like "nettle cells" that give the phylum Cnidaria its name. These appear
  between or sometimes on top of the muscle cells.[4]
• Nerve cells. Sensory cells appear between or sometimes on top of the muscle cells,[4] and
  communicate via synapses (gaps across which chemical signals flow) with motor nerve cells,
  which lie mostly between the bases of the muscle cells.[18]
• Interstitial cells, which are unspecialized and can replace lost or damaged cells by transforming
  into the appropriate types. These are found between the bases of muscle cells.[4]

In addition to epitheliomuscular, nerve and interstitial cells, the inward-facing gastroderm ("stomach
skin") contains gland cells that secrete digestive enzymes. In some species it also contains low
concentrations of cnidocytes, which are used to subdue prey that is still struggling.[4][18]

The mesoglea contains small numbers of amoeba-like cells,[18] and muscle cells in some species.[4]
However, the number and types of middle-layer cells are much lower than in sponges.[18]

[edit] Cnidocytes

A hydra's nematocyst before firing[18]

Firing sequence of the cnida in a hydra's nematocyst[18]

These "nettle cells" function as harpoons, since their payloads remain connected to the bodies of the cells
by threads. Three types of cnidocytes are known:[4][18]

• Nematocysts inject venom into prey, and usually have barbs to keep them embedded in the
  victims. Most species have nematocysts.[4]
• Spirocysts do not penetrate the victim or inject venom, but entangle it by means of small sticky
  hairs on the thread. Only members of the class Anthozoa (sea anemones and corals) have
  spirocysts.[18]
• Ptychocysts are not used for prey capture — instead the threads of discharged ptychocysts are
  used for building protective tubes in which their owners live. Ptychocysts are found only in the
  order Ceriantharia, tube anemones.[18]

The main components of a cnidocyte are:[4][18]

• A cilium (fine hair) which projects above the surface and acts as a trigger. Spirocysts do not have
  cilia.
• A tough capsule, the cnida, which houses the thread, its payload and a mixture of chemicals
  which may include venom or adhesives or both. ("cnida" is derived from the Greek word κνίδη,
  which means "nettle"[25])
• A tube-like extension of the wall of the cnida that points into the cnida, like the finger of a rubber
  glove pushed inwards. When a cnidocyte fires, the finger pops out. If the cell is a venomous
  nematocyte, the "finger"'s tip reveals a set of barbs that anchor it in the prey.
• The thread, which is an extension of the "finger" and coils round it until the cnidocyte fires. The
  thread is usually hollow and delivers chemicals from the cnida to the target.
• An operculum (lid) over the end of the cnida. The lid may be a single hinged flap or three flaps
  arranged like slices of pie.
• The cell body which produces all the other parts.

It is difficult to study the firing mechanisms of cnidocytes as these structures are small but very complex.
At least four hypotheses have been proposed:[4]

• Rapid contraction of fibers round the cnida may increase its internal pressure.
• The thread may be like a coiled spring that extends rapidly when released.
• In the case of Chironex (the "sea wasp"), chemical changes in the cnida's contents may cause
  them to expand rapidly by polymerization.
• Chemical changes in the liquid in the cnida make it a much more concentrated solution, so that
  osmotic pressure forces water in very rapidly to dilute it. This mechanism has been observed in
  nematocysts of the class Hydrozoa, sometimes producing pressures as high as 140 atmospheres,
  similar to that of scuba air tanks, and fully extending the thread in as little as 2 milliseconds
  (0.002 second).[18]

Cnidocytes can only fire once, and about 25% of a hydra's nematocysts are lost from its tentacles when
capturing a brine shrimp. Used cnidocytes have to be replaced, which takes about 48 hours. To minimise
wasteful firing, two types of stimulus are generally required to trigger cnidocytes: their cilia detect
contact, and nearby sensory cells "smell" chemicals in the water. This combination prevents them from
firing at distant or non-living objects. Groups of cnidocytes are usually connected by nerves and, if one
fires, the rest of the group requires a weaker minimum stimulus than the cells that fire first.[4][18]
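
This firing logic behaves like an AND gate with group sensitization. A toy model (Python; the function, the names and the threshold values are mine, not the article's):

    def should_fire(contact, chemical_cue, stimulus,
                    neighbour_fired=False,
                    threshold=1.0, sensitized_threshold=0.5):
        # Both cue types are required, which prevents firing at distant or non-living objects.
        if not (contact and chemical_cue):
            return False
        # Once a neighbouring cnidocyte has fired, a weaker stimulus suffices.
        limit = sensitized_threshold if neighbour_fired else threshold
        return stimulus >= limit

    print(should_fire(True, True, 0.7))                         # False: below the normal threshold
    print(should_fire(True, True, 0.7, neighbour_fired=True))   # True: the group is sensitized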

[edit] Basic body forms

Medusa (left) and polyp (right)[18]

Oral end of actinodiscus polyp, with close-up of the mouth

Adult cnidarians appear as either swimming medusae or sessile polyps. Both are radially symmetrical,
like a wheel and a tube respectively. Since these animals have no heads, their ends are described as "oral"
(nearest the mouth) and "aboral" (furthest from the mouth). Most have fringes of tentacles equipped with
cnidocytes around their edges, and medusae generally have an inner ring of tentacles around the mouth.
The mesoglea of polyps is usually thin and often soft, but that of medusae is usually thick and springy, so
that it returns to its original shape after muscles around the edge have contracted to squeeze water out,
enabling medusae to swim by a sort of jet propulsion.[18]

[edit] Colonial forms

Tree-like polyp colony[18]

Cnidaria produce a variety of colonial forms, each of which is one organism but consists of polyp-like
zooids. The simplest is a connecting tunnel that runs over the substrate (rock or seabed) and from which
single zooids sprout. In some cases the tunnels form visible webs, and in others they are enclosed in a
fleshy mat. More complex forms are also based on connecting tunnels but produce "tree-like" groups of
zooids. The "trees" may be formed either by a central zooid that functions as a "trunk" with later zooids
growing to the sides as "branches", or in a zig-zag shape as a succession of zooids, each of which grows
to full size and then produces a single bud at an angle to itself. In many cases the connecting tunnels and
the "stems" are covered in periderm, a protective layer of chitin.[18] Some colonial forms have other
specialized types of zooid, for example, to pump water through their tunnels.[10]

Siphonophores form complex colonies that consist of: an upside-down polyp that forms a central stem
with a gas-filled float at the top; one or more sets of medusa-like zooids that provide propulsion; leaf-like
bracts that give some protection to other parts; sets of tentacles that bear nematocytes that capture prey;
other tentacles that act as sensors; near the base of each set of tentacles, a polyp-like zooid that acts as a
stomach for the colony; medusa-like zooids that serve as gonads. Although some of these zooids resemble
polyps or medusae in shape, they lack features that are not relevant to their specific functions, for
example the swimming "medusae" have no digestive, sensory or reproductive cells. The best-known
siphonophore is the Portuguese Man o' War (Physalia physalis).[10][26][27]

[edit] Skeletons

In medusae the only supporting structure is the mesoglea. Hydra and most sea anemones close their
mouths when they are not feeding, and the water in the digestive cavity then acts as a hydrostatic
skeleton, rather like a water-filled balloon. Other polyps such as Tubularia use columns of water-filled
cells for support. Sea pens stiffen the mesoglea with calcium carbonate spicules and tough fibrous
proteins, rather like sponges.[18]

In some colonial polyps a chitinous periderm gives support and some protection to the connecting
sections and to the lower parts of individual polyps. Stony corals secrete massive calcium carbonate
exoskeletons. A few polyps collect materials such as sand grains and shell fragments, which they attach to
their outsides. Some colonial sea anemones stiffen the mesoglea with sediment particles.[18]

[edit] Locomotion

Chrysaora quinquecirrha ("sea nettle") swimming

Medusae swim by a form of jet propulsion: muscles, especially inside the rim of the bell, squeeze water
out of the cavity inside the bell, and the springiness of the mesoglea powers the recovery stroke. Since the
tissue layers are very thin, they provide too little power to swim against currents and just enough to
control movement within currents.[18]

Hydras and some sea anemones can move slowly over rocks and sea or stream beds by various means:
creeping like snails, crawling like inchworms, or by somersaulting. A few can swim clumsily by
waggling their bases.[18]

[edit] Nervous system and senses

Cnidaria have no brains or even central nervous systems. Instead they have decentralized nerve nets
consisting of: sensory neurons that generate signals in response to various types of stimulus, such as
odors; motor neurons that tell muscles to contract; all connected by "cobwebs" of intermediate neurons.
As well as forming the "signal cables", intermediate neurons also form ganglia that act as local
coordination centers. The cilia of the cnidocytes detect physical contact. Nerves inform cnidocytes when
odors from prey or attackers are detected and when neighbouring cnidocytes fire. Most of the
communications between nerve cells are via chemical synapses, small gaps across which chemicals flow.
As this process is too slow to ensure that the muscles round the rim of a medusa's bell contract
simultaneously in swimming, the neurons which control this communicate by much faster electrical
signals across gap junctions.[18]

Medusae and complex swimming colonies such as siphonophores and chondrophores sense tilt and
acceleration by means of statocysts, chambers lined with hairs which detect the movements of internal
mineral grains called statoliths. If the body tilts in the wrong direction, the animal rights itself by
increasing the strength of the swimming movements on the side that is too low. They also have ocelli
("little eyes"), which can detect the direction from which light is coming. Box jellies have camera eyes,
although these probably do not form images, and their lenses simply produce a clearer indication of the
direction from which light is coming.[4]
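
The righting behaviour is a simple negative-feedback loop: beat harder on the side that is too low until the tilt disappears. The Python toy model below illustrates the idea; the gain and the initial tilt are invented for illustration:

    # Toy negative-feedback model of statocyst-driven righting.
    # The numbers are invented; only the control principle is from the text.
    tilt = 30.0   # degrees from upright, as sensed by the statocysts
    GAIN = 0.5    # fraction of the tilt corrected per swimming stroke

    for stroke in range(1, 7):
        tilt -= GAIN * tilt   # stronger strokes on the low side reduce the tilt
        print(f"stroke {stroke}: tilt = {tilt:5.2f} degrees")
    # The tilt decays geometrically toward zero: the animal is upright again.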

[edit] Feeding and excretion

Cnidarians feed in several ways: predation, absorbing dissolved organic chemicals, filtering food particles
out of the water, and obtaining nutrients from symbiotic algae within their cells. Most obtain the majority
of their food from predation but some, including the corals Heteroxenia and Leptogorgia, depend almost
completely on their endosymbionts and on absorbing dissolved nutrients.[4] Cnidaria give their symbiotic
algae carbon dioxide, some nutrients and a place in the sun.[18]

Predatory species use their cnidocytes to poison or entangle prey, and those with venomous nematocysts
may start digestion by injecting digestive enzymes. The "smell" of fluids from wounded prey makes the
tentacles fold inwards and wipe the prey off into the mouth. In medusae the tentacles round the edge of
the bell are often short and most of the prey capture is done by "oral arms", which are extensions of the
edge of the mouth and are often frilled and sometimes branched to increase their surface area. Medusae
often trap prey or suspended food particles by swimming upwards, spreading their tentacles and oral arms
and then sinking. In species for which suspended food particles are important, the tentacles and oral arms
often have rows of cilia whose beating creates currents that flow towards the mouth, and some produce
nets of mucus to trap particles.[4]

Once the food is in the digestive cavity, gland cells in the gastroderm release enzymes that reduce the
prey to slurry, usually within a few hours. This circulates through the digestive cavity and, in colonial
cnidarians, through the connecting tunnels, so that gastroderm cells can absorb the nutrients. Absorption
may take a few hours, and digestion within the cells may take a few days. The circulation of nutrients is
driven by water currents produced by cilia in the gastroderm or by muscular movements or both, so that
nutrients reach all parts of the digestive cavity.[18] Nutrients reach the outer cell layer by diffusion or, for
animals or zooids such as medusae which have thick mesogleas, are transported by mobile cells in the
mesoglea.[4]

Indigestible remains of prey are expelled through the mouth. The main waste product of cells' internal
processes is ammonia, which is removed by the external and internal water currents.[18]

[edit] Respiration

There are no respiratory organs, and both cell layers absorb oxygen from and expel carbon dioxide into
the surrounding water. When the water in the digestive cavity becomes stale it must be replaced, and
nutrients that have not been absorbed will be expelled with it. Some Anthozoa have ciliated grooves on
their tentacles, allowing them to pump water out of and into the digestive cavity without opening the
mouth. This improves respiration after feeding and allows these animals, which use the cavity as a
hydrostatic skeleton, to control the water pressure in the cavity without expelling undigested food.[4]

Cnidaria that carry photosynthetic symbionts may have the opposite problem, an excess of oxygen, which
may prove toxic. The animals produce large quantities of antioxidants to neutralize the excess oxygen.[4]

[edit] Regeneration

All cnidarians can regenerate, allowing them to recover from injury and to reproduce asexually. Medusae
have limited ability to regenerate, but polyps can do so from small pieces or even collections of separated
cells. This enables corals to recover even after apparently being destroyed by predators.[4]

[edit] Reproduction

Life cycle of a jellyfish:[4][18]
1–3 Larva searches for site
4–8 Polyp grows
9–11 Polyp strobilates
12–14 Medusa grows

[edit] Sexual

In the Cnidaria, sexual reproduction often involves a complex life cycle with both polyp and medusa
stages. For example, in Scyphozoa (jellyfish) and Cubozoa (box jellies) a larva swims until it finds a good
site, and then becomes a polyp. This grows normally but then absorbs its tentacles and splits horizontally
into a series of disks that become juvenile medusae, a process called strobilation. The juveniles swim off
and slowly grow to maturity, while the polyp re-grows and may continue strobilating periodically. The
adults have gonads in the gastroderm, and these release ova and sperm into the water in the breeding
season.[4][18]

Shortened forms of this life cycle are common, for example some oceanic scyphozoans omit the polyp
stage completely, and cubozoan polyps produce only one medusa. Hydrozoa have a variety of life cycles.
Some have no polyp stages and some (e.g. hydra) have no medusae. In some species the medusae remain
attached to the polyp and are responsible for sexual reproduction; in extreme cases these reproductive
zooids may not look much like medusae. Anthozoa have no medusa stage at all and the polyps are
responsible for sexual reproduction.[4]

Spawning is generally driven by environmental factors such as changes in the water temperature, and
release is triggered by lighting conditions such as sunrise, sunset or the phase of the moon. Many
species of Cnidaria may spawn simultaneously in the same location, so that there are too many ova and
sperm for predators to eat more than a tiny percentage — one famous example is the Great Barrier Reef,
where at least 110 corals and a few non-cnidarian invertebrates produce enough to turn the water cloudy.
These mass spawnings may produce hybrids, some of which can settle and form polyps, but it is not
known how long these can survive. In some species the ova release chemicals that attract sperm of the
same species.[4]

The fertilized eggs develop into larvae by dividing until there are enough cells to form a hollow sphere
(blastula); a depression then forms at one end (gastrulation) and eventually becomes the digestive
cavity. However, in cnidarians the depression forms at the end further from the yolk (at the animal pole),
while in bilaterians it forms at the other end (vegetal pole).[18] The larvae, called planulae, swim or crawl
by means of cilia.[4] They are cigar-shaped but slightly broader at the "front" end, which is the aboral,
vegetal-pole end, and which eventually attaches to a substrate if the species has a polyp stage.[18]

Anthozoan larvae either have large yolks or are capable of feeding on plankton, and some already have
endosymbiotic algae that help to feed them. Since the parents are immobile, these feeding capabilities
extend the larvae's range and avoid overcrowding of sites. Scyphozoan and hydrozoan larvae have little
yolk and most lack endosymbiotic algae, and therefore have to settle quickly and metamorphose into
polyps. Instead these species rely on their medusae to extend their ranges.[18]

[edit] Asexual

All known cnidaria can reproduce asexually by various means, in addition to regenerating after being
fragmented. Hydrozoan polyps only bud, while the medusae of some hydrozoans can divide down the
middle. Scyphozoan polyps can both bud and split down the middle. In addition to both of these methods,
Anthozoa can split horizontally just above the base.[4][18]

[edit] Evolutionary history

[edit] Fossil record

The fossil coral Cladocora from Pliocene rocks in Cyprus

The earliest widely accepted animal fossils are rather modern-looking cnidarians, possibly from around
580 million years ago, although fossils from the Doushantuo Formation can only be dated
approximately.[28] The identification of some of these as embryos of animals has been contested, but other
fossils from these rocks strongly resemble tubes and other mineralized structures made by corals.[29] Their
presence implies that the cnidarian and bilaterian lineages had already diverged.[30] Although the
Ediacaran fossil Charnia used to be classified as a jellyfish or sea pen,[31] more recent study of growth
patterns in Charnia and modern cnidarians has cast doubt on this hypothesis,[32][33] and there are now no
bona fide cnidarian body fossils in the Ediacaran. Few fossils of cnidarians without mineralized skeletons
are known from more recent rocks, except in lagerstätten that preserved soft-bodied animals.[34]

A few mineralized fossils that resemble corals have been found in rocks from the Cambrian period, and
corals diversified in the Early Ordovician.[34] These corals, which were wiped out in the Permian-Triassic
extinction about 251 million years ago,[34] did not dominate reef construction since sponges and algae also
played a major part.[35] During the Mesozoic era rudist bivalves were the main reef-builders, but they were
wiped out in the Cretaceous-Tertiary extinction 65 million years ago,[36] and since then the main reef-
builders have been scleractinian corals.[34]

[edit] Interaction with humans

Jellyfish stings killed about 1,500 people in the 20th century,[45] and cubozoans are particularly
dangerous. On the other hand, some large jellyfish are considered a delicacy in eastern and southern Asia.
Coral reefs have long been economically important as providers of fishing grounds, protectors of shore
buildings against currents and tides, and more recently as centers of tourism. However, they are
vulnerable to over-fishing, mining for construction materials, pollution, and damage caused by tourism.

Beaches protected from tides and storms by coral reefs are often the best places for housing in tropical
countries. Reefs are an important food source for low-technology fishing, both on the reefs themselves
and in the adjacent seas.[46] However, despite their great productivity, reefs are vulnerable to over-fishing,
because much of the organic carbon they produce is exhaled as carbon dioxide by organisms at the middle
levels of the food chain and never reaches the larger species that are of interest to fishermen.[16] Tourism
centered on reefs provides much of the income of some tropical islands, attracting photographers, divers
and sports fishermen. However human activities damage reefs in several ways: mining for construction
materials; pollution, including large influxes of fresh water from storm drains; commercial fishing,
including the use of dynamite to stun fish and the capture of young fish for aquariums; and tourist damage
caused by boat anchors and the cumulative effect of walking on the reefs.[46] Coral, mainly from the
Pacific Ocean, has long been used in jewellery, and demand rose sharply in the 1980s.[47]

The dangerous "sea wasp" Chironex fleckeri

Some large jellyfish species have been used in Chinese cuisine at least since 200 AD, and are now fished
in the seas around most of South East Asia. Japan is the largest single consumer of edible jellyfish,
importing at first only from China but, as prices rose in the 1970s, now from all of South East Asia. This
fishing industry is restricted to daylight hours and calm conditions in two short seasons, from March to
May and August to November.[48] The commercial value of jellyfish food products depends on the skill
with which they are prepared, and "Jellyfish Masters" guard their trade secrets carefully. Jellyfish is very
low in cholesterol and sugars, but cheap preparation can introduce undesirable amounts of heavy
metals.[49]

The "sea wasp" Chironex fleckeri has been described as the world's most venomous animal and is held
responsible for 67 deaths, although it is difficult to identify the animal as it is almost transparent. Most
stingings by C. fleckeri cause only mild symptoms.[50] Seven other box jellies can cause a set of symptoms
called Irukandji syndrome,[51] which takes about 30 minutes to develop,[52] and from a few hours to
two weeks to disappear.[53] Hospital treatment is usually required, and there have been a few deaths.[51]

MICROBIAL DISEASES
Anthrax

From Wikipedia, the free encyclopedia


Anthrax

Classification and external resources

Microphotograph of a Gram stain of the bacterium


Bacillus anthracis, the cause of the anthrax disease

ICD-10 A22

ICD-9 022

DiseasesDB 1203

MedlinePlus 001325

eMedicine med/148

MeSH D000881

Anthrax is an acute disease caused by the bacterium Bacillus anthracis. Most forms of the disease are
lethal, and it affects both humans and other animals. There are effective vaccines against anthrax, and
some forms of the disease respond well to antibiotic treatment.

Like many other members of the genus Bacillus, Bacillus anthracis can form dormant endospores (often
referred to as "spores" for short, but not to be confused with fungal spores) that are able to survive in
harsh conditions for decades or even centuries.[1] Such spores can be found on all continents, even
Antarctica.[2] When spores are inhaled, ingested, or come into contact with a skin lesion on a host they
may reactivate and multiply rapidly.

Anthrax commonly infects wild and domesticated herbivorous mammals that ingest or inhale the spores
while grazing. Ingestion is thought to be the most common route by which herbivores contract anthrax.
Carnivores living in the same environment may become infected by consuming infected animals.
Diseased animals can spread anthrax to humans, either by direct contact (e.g., inoculation of infected
blood to broken skin) or by consumption of a diseased animal's flesh.

Anthrax spores can be produced in vitro and used as a biological weapon. Anthrax does not spread
directly from one infected animal or person to another; it is spread by spores. These spores can be
transported by clothing or shoes. The dead body of an animal that died of anthrax can also be a source of
anthrax spores.

Contents


• 1 Overview
• 2 Cause
  o 2.1 Bacteria
  o 2.2 Exposure
  o 2.3 Mode of infection
    ▪ 2.3.1 Pulmonary
    ▪ 2.3.2 Gastrointestinal
    ▪ 2.3.3 Cutaneous
• 3 Diagnosis
• 4 Prevention
  o 4.1 Vaccines
• 5 Treatment
• 6 History
  o 6.1 Etymology
  o 6.2 Discovery
  o 6.3 First vaccination
• 7 Society and culture
  o 7.1 Site cleanup
  o 7.2 Biological warfare
• 8 See also
• 9 Notes
• 10 External links

[edit] Overview

Color-enhanced scanning electron micrograph shows splenic tissue from a monkey with inhalational
anthrax; featured are rod-shaped bacilli (yellow) and an erythrocyte (red).

Until the twentieth century, anthrax infections killed hundreds of thousands of animals and people each
year in Australia, Asia, Africa, North America, and Europe.[3] French scientist Louis Pasteur developed
the first effective vaccine for anthrax in
1881.[4][5][6] Thanks to over a century of animal vaccination programs, sterilization of raw animal waste
materials and anthrax eradication programs in North America, Australia, New Zealand, Russia, Europe,
and parts of Africa and Asia, anthrax infection is now relatively rare in domestic animals, with only a few
dozen cases reported every year. Anthrax is even rarer in dogs and cats: only one case in dogs had been
documented in the USA by 2001, although the disease affects livestock.[7] Anthrax typically
does not cause disease in carnivores and scavengers, even when these animals consume anthrax-infected
carcasses. Anthrax outbreaks do occur in some wild animal populations with some regularity. [8] The
disease is more common in developing countries without widespread veterinary or human public health
programs.

Bacillus anthracis bacterial spores are soil-borne and, because of their long lifetime, remain present
globally, particularly at burial sites of anthrax-killed animals, for many decades; spores have been known
to have reinfected animals over 70 years after burial sites were disturbed.[9]

The virulent Ames strain, which had been used in the 2001 anthrax attacks in the United States, has
received the most news coverage of any anthrax outbreak. The Ames strain contains two virulence
plasmids, which separately encode for a three-protein toxin, called anthrax toxin, and a poly-glutamic
acid capsule. Nonetheless, the Vollum strain, developed but never used as a biological weapon during the
Second World War, is much more dangerous. The Vollum (also incorrectly referred to as Vellum) strain
was isolated in 1935 from a cow in Oxfordshire, UK. This is the same strain that was used during the
Gruinard bioweapons trials. A variation of Vollum known as "Vollum 1B" was used during the 1960s in
the US and UK bioweapon programs. Vollum 1B is widely believed[10] to have been isolated from
William A. Boyles, a 46-year-old scientist at the U.S. Army Biological Warfare Laboratories at Camp
(later Fort) Detrick (precursor to USAMRIID) who died in 1951 after being accidentally infected with the
Vollum strain. The Sterne strain, named after the Trieste-born immunologist Max Sterne, is an attenuated
strain used as a vaccine, which contains only the anthrax toxin virulence plasmid and not the poly-
glutamic acid capsule expressing plasmid.

[edit] Cause

[edit] Bacteria

Gram-positive anthrax bacteria (purple rods) in cerebrospinal fluid sample. If present, a Gram-negative
bacterial species would appear pink. (The other cells are white blood cells).

Main article: Bacillus anthracis

Bacillus anthracis is a rod-shaped, Gram-positive, aerobic bacterium about 1 by 9 micrometers in size.


It was shown to cause disease by Robert Koch in 1876.[11] The bacterium normally rests in endospore
form in the soil, and can survive for decades in this state. Herbivores are often infected whilst grazing or
browsing, especially when eating rough, irritant or spiky vegetation: the vegetation has been hypothesized
to cause wounds within the gastrointestinal tract permitting entry of the bacterial endospores into the
tissues, though this has not been proven. Once ingested or placed in an open cut, the bacterium begins
multiplying inside the animal or human and typically kills the host within a few days or weeks. The
endospores germinate at the site of entry into the tissues and then spread via the circulation to the lymphatics,
where the bacteria multiply.

It is the production of two powerful exotoxins, edema toxin and lethal toxin, by the bacteria that causes death.
Veterinarians can often tell a possible anthrax-induced death by its sudden occurrence, and by the dark,
non-clotting blood that oozes from the body orifices. Most anthrax bacteria inside the body after death are
out-competed and destroyed by anaerobic bacteria within minutes to hours postmortem. However, anthrax
vegetative bacteria that escape the body via oozing blood or through the opening of the carcass may form
hardy spores. One spore forms per one vegetative bacterium. The triggers for spore formation are not yet
known, though oxygen tension and lack of nutrients may play roles. Once formed, these spores are very
hard to eradicate.

The infection of herbivores (and occasionally humans) via the inhalational route normally proceeds as
follows: Once the spores are inhaled, they are transported through the air passages into the tiny air
sacs (alveoli) in the lungs. The spores are then picked up by scavenger cells (macrophages) in
the lungs and are transported through small vessels (lymphatics) to the lymph nodes in the central chest
cavity (mediastinum). Damage caused by the anthrax spores and bacilli to the central chest cavity can
cause chest pain and difficulty breathing. Once in the lymph nodes, the spores germinate into active
bacilli that multiply and eventually burst the macrophages, releasing many more bacilli into the
bloodstream to be transferred to the entire body. Once in the blood stream, these bacilli release three
proteins named lethal factor, edema factor, and protective antigen. All three are non-toxic by themselves,
but the combination is highly lethal to humans.[12] Protective antigen combines with these other two
factors to form lethal toxin and edema toxin, respectively. These toxins are the primary agents of tissue
destruction, bleeding, and death of the host. If antibiotics are administered too late, even if the antibiotics
eradicate the bacteria, some hosts will still die of toxemia. This is because the toxins produced by the
bacilli remain in their system at lethal dose levels.

The lethality of the anthrax disease owes itself to the bacterium's two principal virulence factors: (i) the
poly-D-glutamic acid capsule, which protects the bacterium from phagocytosis by host neutrophils, and
(ii) the tripartite protein toxin, called anthrax toxin. Anthrax toxin is a mixture of three protein
components: (i) protective antigen (PA), (ii) edema factor (EF), and (iii) lethal factor (LF). PA plus LF
produces lethal toxin, and PA plus EF produces edema toxin. These toxins cause death and tissue
swelling (edema), respectively.

In order to enter the cells, the edema and lethal factors use another protein produced by B. anthracis
called protective antigen. Protective antigen binds to two surface receptors on the host cell. A cell
protease then cleaves PA into two fragments: PA20 and PA63. PA20 dissociates into the extracellular
medium, playing no further role in the toxic cycle. PA63 then oligomerizes with six other PA63 fragments
forming a heptameric ring-shaped structure named a prepore. Once in this shape, the complex can
competitively bind up to three EF or LF forming a resistant complex. [12] Receptor-mediated endocytosis
occurs next, providing the newly formed toxic complex access to the interior of the host cell. The
acidified environment within the endosome triggers the heptamer to release the LF and/or EF into the
cytosol.[13] It is unknown how exactly the complex results in the death of the cell.
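
The assembly steps just described follow a fixed stoichiometry: each PA is cleaved into PA20 (discarded) and PA63, seven PA63 fragments form one prepore, and each ring binds at most three EF and/or LF molecules. The Python sketch below is bookkeeping only; the function names are invented for illustration and it does not model the real chemistry:

    # Bookkeeping sketch of anthrax toxin assembly as described above.
    # The stoichiometry (7 PA63 per prepore, <= 3 EF/LF per ring) is from
    # the text; everything else is invented for illustration.
    PREPORE_SIZE = 7   # PA63 fragments per heptameric ring
    MAX_CARGO = 3      # EF and/or LF molecules bound per prepore

    def cleave_pa(pa: int) -> int:
        """Protease cleavage: each PA yields one PA63; PA20 diffuses away."""
        return pa

    def assemble_prepores(pa63: int) -> int:
        """Seven PA63 fragments oligomerize into one ring-shaped prepore."""
        return pa63 // PREPORE_SIZE

    def load_cargo(prepores: int, ef: int, lf: int) -> int:
        """Each prepore competitively binds up to three EF or LF molecules."""
        return min(prepores * MAX_CARGO, ef + lf)

    rings = assemble_prepores(cleave_pa(pa=70))   # 10 prepores
    delivered = load_cargo(rings, ef=20, lf=20)   # 30 of the 40 factors
    print(rings, delivered)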

Edema factor is a calmodulin-dependent adenylate cyclase. Adenylate cyclase catalyzes the conversion of
ATP into cyclic AMP (cAMP) and pyrophosphate. The complexation of adenylate cyclase with
calmodulin removes calmodulin from stimulating calcium-triggered signaling, thus inhibiting the immune
response.[12] To be specific, EF inactivates neutrophils (a type of phagocytic cell) by the process just
described so that they cannot phagocytose bacteria. It was long believed that lethal factor
caused macrophages to make TNF-alpha and interleukin 1, beta (IL1B). TNF-alpha is a cytokine whose
primary role is to regulate immune cells as well as to induce inflammation and apoptosis or programmed
cell death. Interleukin 1, beta is another cytokine that also regulates inflammation and apoptosis. The
over-production of TNF-alpha and IL1B ultimately leads to septic shock and death. However, recent
evidence indicates that anthrax also targets endothelial cells (cells that line serous cavities such as the
pericardial cavity, pleural cavity, and the peritoneal cavity, lymph vessels, and blood vessels), causing
vascular leakage of fluid and cells, and ultimately hypovolemic shock (low blood volume), and septic
shock.

[edit] Exposure

Occupational exposure to infected animals or their products (such as skin, wool, and meat) is the usual
pathway of exposure for humans. Workers who are exposed to dead animals and animal products are at
the highest risk, especially in countries where anthrax is more common. Anthrax in livestock grazing on
open range where they mix with wild animals still occasionally occurs in the United States and elsewhere.
Many workers who deal with wool and animal hides are routinely exposed to low levels of anthrax spores
but most exposures are not sufficient to develop anthrax infections. It is presumed that the body's natural
defenses can destroy low levels of exposure. Those who do become infected usually contract cutaneous
anthrax. Historically, the most dangerous form of inhalational anthrax was called
Woolsorters' disease because it was an occupational hazard for people who sorted wool. Today this form
of infection is extremely rare, as almost no infected animals remain. The last fatal case of natural
inhalational anthrax in the United States occurred in California in 1976, when a home weaver died after
working with infected wool imported from Pakistan. The autopsy was done at UCLA hospital. To
minimize the chance of spreading the disease, the deceased was transported to UCLA in a sealed plastic
body bag within a sealed metal container.[14]

In November 2008, a drum maker in the United Kingdom who worked with untreated animal skins died
from anthrax.[15] Gastrointestinal anthrax is exceedingly rare in the United States, with only one case on
record, reported in 1942, according to the Centers for Disease Control and Prevention.[16] In December
2009 an outbreak of anthrax occurred amongst heroin addicts in Glasgow, Scotland, resulting in ten
deaths.[17] The source of the anthrax is believed to be dilution of the heroin with bone meal in
Afghanistan.[18]

Also during December 2009, The New Hampshire Department of Health and Human Services confirmed
a case of gastrointestinal anthrax in an adult female. The CDC investigated the source and the possibility
that it was contracted from an African drum recently used by the woman taking part in a drumming
circle.[19] The woman apparently inhaled anthrax [in spore form] from the hide of the drum. She became
critically ill, but with gastrointestinal anthrax rather than inhaled anthrax, which made her unique in
American medical history. The building where the infection took place was cleaned and reopened to the
public and the woman recovered. Jodie Dionne-Odom, New Hampshire state epidemiologist, states, "It is
a mystery. We really don't know why it happened."[20]

[edit] Mode of infection

Inhalational anthrax, mediastinal widening

Anthrax can enter the human body through the intestines (ingestion), lungs (inhalation), or skin
(cutaneous) and causes distinct clinical symptoms based on its site of entry. In general, an infected
human is quarantined, although anthrax does not usually spread from an infected human to a
noninfected human. However, if the disease is fatal, the body's mass of anthrax bacilli becomes a
potential source of infection to others, and special precautions should be used to prevent further
contamination. Inhalational anthrax, if left untreated until obvious symptoms occur, may be fatal.

Anthrax can be contracted in laboratory accidents or by handling infected animals or their wool or hides.
It has also been used as a biological warfare agent, and by terrorists to intentionally infect victims, as
exemplified by the 2001 anthrax attacks.

[edit] Pulmonary

Respiratory infection in humans initially presents with cold or flu-like symptoms for several days,
followed by severe (and often fatal) respiratory collapse. Historical mortality was 92%, but, when treated
early (seen in the 2001 anthrax attacks), observed mortality was 45%.[21] Distinguishing pulmonary
anthrax from more common causes of respiratory illness is essential to avoiding delays in diagnosis and
thereby improving outcomes. An algorithm for this purpose has been developed.[22] Illness progressing to
the fulminant phase has a 97% mortality regardless of treatment.

A lethal infection is reported to result from inhalation of about 10,000–20,000 spores, though this dose
varies among host species.[23] As with all diseases, it is presumed that there is wide variation in
susceptibility, with evidence that some people may die from much lower exposures; there is little
documented evidence to verify the exact or average number of spores needed for infection. Inhalational
anthrax is also known as Woolsorters' or Ragpickers' disease as these professions were more susceptible
to the disease due to their exposure to infected animal products. Other practices associated with exposure
include the slicing up of animal horns for the manufacture of buttons, the handling of hair bristles used for
the manufacturing of brushes, and the handling of animal skins. Whether these animal skins came from
animals that died of the disease or from animals that had simply lain on ground that had spores on it is
unknown. This mode of infection is used as a bioweapon.
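
Figures like the quoted 10,000–20,000 spores are often summarized with a simple dose-response curve. The Python sketch below assumes an exponential model purely for illustration; neither the exponential form nor the 15,000-spore median is from the source:

    import math

    # Illustrative exponential dose-response model (an assumption, not from
    # the text): P(infection) = 1 - exp(-k * dose), with the midpoint of the
    # quoted 10,000-20,000 spore range taken as the 50% dose.
    MEDIAN_DOSE = 15_000
    K = math.log(2) / MEDIAN_DOSE

    def p_infection(spores: float) -> float:
        """Probability of infection for a given inhaled spore count."""
        return 1.0 - math.exp(-K * spores)

    for dose in (1_000, 10_000, 15_000, 20_000, 50_000):
        print(f"{dose:>6} spores -> P = {p_infection(dose):.2f}")
    # Note the nonzero risk at low doses, consistent with the caveat that
    # some people may die from much lower exposures.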

[edit] Gastrointestinal

Gastrointestinal infection in humans is most often caused by eating anthrax-infected meat and is
characterized by serious gastrointestinal difficulty, vomiting of blood, severe diarrhea, acute inflammation
of the intestinal tract, and loss of appetite. Some lesions have been found in the intestines and in the
mouth and throat. After the bacterium invades the bowel system, it spreads through the bloodstream
throughout the body, making even more toxins on the way. Gastrointestinal infections can be treated but
usually result in fatality rates of 25% to 60%, depending upon how soon treatment commences. [24] This
form of anthrax is the rarest form. In the United States, there is only one official case reported in 1942 by
the CDC.[16]

[edit] Cutaneous

Anthrax skin lesion

Cutaneous (on the skin) anthrax infection in humans shows up as a boil-like skin lesion that eventually
forms an ulcer with a black center (eschar). The black eschar often shows up as a large, painless necrotic
ulcer (beginning as an irritating and itchy skin lesion or blister that is dark and usually concentrated as a
black dot, somewhat resembling bread mold) at the site of infection. In general, cutaneous infections form
within the site of spore penetration between 2 and 5 days after exposure. Unlike bruises or most other
lesions, cutaneous anthrax infections normally do not cause pain.[24]

Cutaneous anthrax is typically caused when Bacillus anthracis spores enter through cuts on the skin. This
form of anthrax is found most commonly when humans handle infected animals and/or animal products
(e.g., the hide of an animal used to make drums).

Cutaneous anthrax is rarely fatal if treated,[21] because the infection area is limited to the skin, preventing
the lethal factor, edema factor, and protective antigen from entering and destroying a vital organ.
Without treatment, about 20% of cutaneous infection cases progress to toxemia and death.

Treatment typically includes antibiotic therapy. Specific guidelines are available for adults, children,
pregnant women, and immunocompromised persons. The differential diagnosis includes multiple entities
and thus accurate diagnosis is imperative. Clinical examination coupled with culture and cutaneous
biopsy can aid in accurate diagnosis.

[edit] Diagnosis

Other than Gram stain of specimens, there are no specific direct identification techniques for
identification of Bacillus species in clinical material. These organisms are Gram-positive but with age can
be Gram-variable to Gram-negative. A feature of Bacillus species that distinguishes them from other
aerobic microorganisms is their ability to produce spores. Although spores are not always evident on a
Gram stain of this organism, the presence of spores confirms that the organism is of the genus Bacillus.

All Bacillus species grow well on 5% sheep blood agar and other routine culture media. PLET
(polymyxin-lysozyme-EDTA-thallous acetate) agar can be used to isolate B. anthracis from contaminated
specimens, and bicarbonate agar is used as an identification method to induce capsule formation.

Bacillus sp. will usually grow within 24 hours of incubation at 35 °C, in ambient air or in 5% CO2. If
bicarbonate agar is used for identification, then the medium must be incubated in 5% CO2.

B. anthracis colonies appear medium to large, gray, flat, and irregular with swirling projections, often
referred to as a "medusa head" appearance, and are non-hemolytic on 5% sheep blood agar. The organism
is non-motile, is susceptible to penicillin and produces a wide zone of lecithinase on egg yolk agar.
Confirmatory testing to identify B. anthracis includes gamma bacteriophage testing, indirect
hemagglutination and enzyme-linked immunosorbent assay to detect antibodies.[25]
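
The colony and biochemical traits above can be read as a presumptive checklist. The Python sketch below makes that logic explicit; the criteria are taken from the text, but the checklist function itself is invented for illustration and is no substitute for the confirmatory tests just mentioned:

    # Rule-based sketch of the presumptive B. anthracis traits listed above.
    # Illustrative only; confirmation requires phage, serology and ELISA.
    PRESUMPTIVE_PROFILE = {
        "medusa_head_colonies": True,    # gray, flat, swirling projections
        "hemolytic_on_sheep_blood": False,
        "motile": False,
        "penicillin_susceptible": True,
        "lecithinase_on_egg_yolk": True,
    }

    def presumptive_b_anthracis(observed: dict) -> bool:
        """True if every observed trait matches the presumptive profile."""
        return all(observed.get(k) == v for k, v in PRESUMPTIVE_PROFILE.items())

    isolate = dict(PRESUMPTIVE_PROFILE)      # an isolate matching every criterion
    print(presumptive_b_anthracis(isolate))  # True -> send for confirmation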

[edit] Prevention

[edit] Vaccines

An anthrax vaccine licensed by the U.S. Food and Drug Administration (FDA) and produced from one
non-virulent strain of the anthrax bacterium is manufactured by BioPort Corporation, a subsidiary of
Emergent BioSolutions. The trade name is BioThrax, although it is commonly called Anthrax Vaccine
Adsorbed (AVA). It was formerly administered in a six-dose primary series at 0, 2, 4 weeks and 6, 12, 18
months, with annual boosters to maintain immunity. On December 11, 2008, the FDA approved the
removal of the 2-week dose, resulting in the currently recommended five-dose series.[26]
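
The dose timings are easy to tabulate. The Python snippet below computes calendar dates for the current five-dose series from an arbitrary first-dose date; the offsets follow the schedule in the text, the start date is an example, and months are approximated as 30 days:

    from datetime import date, timedelta

    # Five-dose AVA primary series after the 2008 removal of the 2-week dose:
    # doses at 0 and 4 weeks, then 6, 12 and 18 months (per the text).
    WEEK, MONTH = timedelta(weeks=1), timedelta(days=30)  # 30-day "month"
    OFFSETS = [0 * WEEK, 4 * WEEK, 6 * MONTH, 12 * MONTH, 18 * MONTH]

    def schedule(first_dose: date) -> list:
        """Calendar dates of the five-dose primary series."""
        return [first_dose + off for off in OFFSETS]

    for i, d in enumerate(schedule(date(2024, 1, 1)), start=1):
        print(f"dose {i}: {d.isoformat()}")
    # Annual boosters then follow to maintain immunity.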

Unlike NATO countries, the Soviets developed and used a live spore anthrax vaccine, known as the STI
vaccine, produced in Tbilisi, Georgia. Its serious side-effects restrict use to healthy adults.[27]

[edit] Treatment

Anthrax cannot be spread directly from person to person, but a patient's clothing and body may be
contaminated with anthrax spores. Effective decontamination of people can be accomplished by a
thorough wash-down with antimicrobial soap and water. Waste water should be treated with
bleach or other anti-microbial agent. Effective decontamination of articles can be accomplished by boiling
contaminated articles in water for 30 minutes or longer. Chlorine bleach is ineffective in destroying
spores and vegetative cells on surfaces, though formaldehyde is effective. Burning clothing is very
effective in destroying spores. After decontamination, there is no need to immunize, treat or isolate
contacts of persons ill with anthrax unless they were also exposed to the same source of infection. Early
antibiotic treatment of anthrax is essential—delay significantly lessens chances for survival. Treatment
for anthrax infection and other bacterial infections includes large doses of intravenous and oral
antibiotics, such as fluoroquinolones, like ciprofloxacin (cipro), doxycycline, erythromycin, vancomycin
or penicillin. In possible cases of inhalation anthrax, early antibiotic prophylaxis treatment is crucial to
prevent possible death. In May 2009, Human Genome Sciences submitted a Biologic License Application
(BLA, permission to market) for its new drug, raxibacumab (brand name ABthrax) intended for
emergency treatment of inhaled anthrax.[28] If death occurs from anthrax, the body should be isolated to
prevent possible spread of anthrax germs. Burial does not kill anthrax spores.

If a person is suspected as having died from anthrax, every precaution should be taken to avoid skin
contact with the potentially contaminated body and fluids exuded through natural body openings. The
body should be put in strict quarantine. A blood sample taken in a sealed container and analyzed in an
approved laboratory should be used to ascertain if anthrax is the cause of death. Microscopic visualization
of the encapsulated bacilli, usually in very large numbers, in a blood smear stained with polychrome
methylene blue (McFadyean stain) is fully diagnostic, though culture of the organism is still the gold
standard for diagnosis. Full isolation of the body is important to prevent possible contamination of others.
Protective, impermeable clothing and equipment such as rubber gloves, rubber apron, and rubber boots
with no perforations should be used when handling the body. No skin, especially if it has any wounds or
scratches, should be exposed. Disposable personal protective equipment is preferable, but if not available,
decontamination can be achieved by autoclaving. Disposable personal protective equipment and filters
should be autoclaved, and/or burned and buried. Bacillus anthracis bacilli range from 0.5–5.0 μm in size.
Anyone working with anthrax in a suspected or confirmed victim should wear respiratory equipment
capable of filtering this size of particle or smaller. The US National Institute for Occupational Safety and
Health (NIOSH) and Mine Safety and Health Administration (MSHA) approved high-efficiency
respirator, such as a half-face disposable respirator with a high-efficiency particulate air (HEPA) filter, is
recommended.[29] All possibly contaminated bedding or clothing should be isolated in double plastic bags
and treated as possible bio-hazard waste. The victim should be sealed in an airtight body bag. Dead
victims that are opened and not burned provide an ideal source of anthrax spores. Cremating victims is the
preferred way of handling body disposal. No embalming or autopsy should be attempted without a fully
equipped biohazard laboratory and trained and knowledgeable personnel.

Delays of only a few days may make the disease untreatable, and treatment should be started even without
symptoms if possible contamination or exposure is suspected. Animals with anthrax often die suddenly
without any apparent symptoms. Initial symptoms may resemble a common cold—sore throat, mild fever, muscle
aches and malaise. After a few days, the symptoms may progress to severe breathing problems and shock
and ultimately death. Death can occur from about two days to a month after exposure with deaths
apparently peaking at about 8 days after exposure.[30] Antibiotic-resistant strains of anthrax are known.

In recent years there have been many attempts to develop new drugs against anthrax, but existing drugs
are effective if treatment is started soon enough.

Early detection of sources of anthrax infection can allow preventive measures to be taken. In response to
the anthrax attacks of October 2001 the United States Postal Service (USPS) installed BioDetection
Systems (BDS) in their large scale mail cancellation facilities. BDS response plans were formulated by
the USPS in conjunction with local responders including fire, police, hospitals and public health.
Employees of these facilities have been educated about anthrax, response actions and prophylactic
medication. Because of the time delay inherent in getting final verification that anthrax has been used,
prophylactic antibiotic treatment of possibly exposed personnel must be started as soon as possible.

[edit] History

[edit] Etymology

The name comes from anthrax [άνθραξ], the Greek word for 'coal', because of the black skin lesions
developed by victims with a cutaneous anthrax infection.

[edit] Discovery

Robert Koch, a German physician and scientist, first identified the bacterium that caused the anthrax
disease in 1876.[11][31] His pioneering work in the late nineteenth century was one of the first
demonstrations that diseases could be caused by microbes. In a groundbreaking series of experiments, he
uncovered the life cycle and means of transmission of anthrax. His experiments not only helped create an
understanding of anthrax, but also helped elucidate the role of microbes in causing illness at a time when
debates were still held over spontaneous generation versus cell theory. Koch went on to study the
mechanisms of other diseases and was awarded the 1905 Nobel Prize in Physiology or Medicine for his
discovery of the bacterium causing tuberculosis. Koch is today recognized as one of history's most
important biologists and a founder of modern bacteriology.

[edit] First vaccination

In May 1881 Louis Pasteur performed a public experiment to demonstrate his concept of vaccination. He
prepared two groups of 25 sheep, one goat and several cows. The animals of one group were injected twice
with an anti-anthrax vaccine prepared by Pasteur, at an interval of 15 days; the control group was left
unvaccinated. Thirty days after the first injection, both groups were injected with a culture of live anthrax
bacteria. All the animals in the non-vaccinated group died, while all of the animals in the vaccinated
group survived.[32] The human vaccine for anthrax became available in 1954. This was a cell-free vaccine
instead of the live-cell Pasteur-style vaccine used for veterinary purposes. An improved cell-free vaccine
became available in 1970.[33]

[edit] Society and culture

[edit] Site cleanup

Anthrax spores can survive for long periods of time in the environment after release. Methods for
cleaning anthrax-contaminated sites commonly use oxidizing agents such as peroxides, ethylene oxide,
Sandia Foam,[34] chlorine dioxide (used in Hart Senate office building), and liquid bleach products
containing sodium hypochlorite. These agents slowly destroy bacterial spores. A bleach solution for
treating hard surfaces has been approved by the EPA.[35] Bleach and vinegar must not be combined
directly, as doing so could produce chlorine gas. Rather, some water must first be added to the
bleach (e.g., two cups water to one cup of bleach), then vinegar (e.g., one cup), and then the rest of the
water (e.g., six cups). The pH of the solution should be tested with a paper test strip; and treated surfaces
must remain in contact with the bleach solution for 60 minutes (repeated applications will be necessary to
keep the surfaces wet).
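
The proportions above (two parts water, one part bleach, one part vinegar, six more parts water) scale linearly to any batch size. The Python helper below reports the quantities in mixing order; the function is invented for illustration and follows the example cup measures given in the text:

    # Helper for the ten-part bleach solution described above:
    # 2 parts water + 1 part bleach + 1 part vinegar + 6 parts water.
    # Mixing ORDER matters (water before vinegar, to avoid chlorine gas);
    # this only reports quantities in the order they should be added.
    PARTS = [("water (first)", 2), ("bleach", 1), ("vinegar", 1), ("water (rest)", 6)]
    TOTAL = sum(p for _, p in PARTS)  # 10 parts

    def batch(total_volume: float) -> list:
        """(ingredient, volume) pairs, in mixing order, for one batch."""
        return [(name, total_volume * p / TOTAL) for name, p in PARTS]

    for name, vol in batch(total_volume=2.5):   # e.g. 2.5 litres of solution
        print(f"{name:>13}: {vol:.2f} L")
    # Test the pH with a paper strip and keep surfaces wet for 60 minutes.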

Chlorine dioxide has emerged as the preferred biocide against anthrax-contaminated sites, having been
employed in the treatment of numerous government buildings over the past decade. Its chief drawback is
that the reactant must be generated in situ, on demand.

To speed the process, trace amounts of a non-toxic catalyst composed of iron and tetra-amido
macrocyclic ligands are combined with sodium carbonate and bicarbonate and converted into a spray. The
spray formula is applied to an infested area and is followed by another spray containing tert-Butyl
hydroperoxide.[36]

Using the catalyst method, a complete destruction of all anthrax spores can be achieved in under 30
minutes.[36] A standard catalyst-free spray destroys fewer than half the spores in the same amount of time;
the spores can be heated and exposed to the harshest chemicals, yet do not easily die.

Cleanups at a Senate office building, several contaminated postal facilities and other U.S. government and
private office buildings showed that decontamination is possible, but it is time-consuming and costly.
Clearing the Senate office building of anthrax spores cost $27 million, according to the Government
Accountability Office. Cleaning the Brentwood postal facility outside Washington cost $130 million and
took 26 months. Since then newer and less costly methods have been developed.[37]

Clean up of anthrax-contaminated areas on ranches and in the wild is much more problematic. Carcasses
may be burned, though it often takes up to three days to burn a large carcass and this is not feasible in
areas with little wood. Carcasses may also be buried, though the burying of large animals deeply enough
to prevent resurfacing of spores requires much manpower and expensive tools. Carcasses have been
soaked in formaldehyde to kill spores, though this has environmental contamination issues. Block burning
of vegetation in large areas enclosing an anthrax outbreak has been tried; this, while environmentally
destructive, causes healthy animals to move away from an area with carcasses in search of fresh graze and
browse. Some wildlife workers have experimented with covering fresh anthrax carcasses with shadecloth
and heavy objects. This prevents some scavengers from opening the carcasses, thus allowing the
putrefactive bacteria within the carcass to kill the vegetative B. anthracis cells and preventing sporulation.
This method also has drawbacks, as scavengers such as hyenas are capable of infiltrating almost any
exclosure. The occurrence of previously dormant anthrax, stirred up from below the ground surface by
wind movement in a drought-stricken region with depleted grazing and browsing, may be seen as a form
of natural culling and a first step in rehabilitation of the area.

[edit] Biological warfare

Anthrax was first tested as a biological warfare agent by Unit 731 of the Japanese Kwantung Army in
Manchuria during the 1930s; some of this testing involved intentional infection of prisoners of war,
thousands of whom died. Anthrax, designated at the time as Agent N, was also investigated by the Allies
in the 1940s. Weaponized anthrax was part of the U.S. stockpile prior to 1972, when the United States
signed the Biological Weapons Convention.[38]

Colin Powell holding a model vial of anthrax while giving a presentation to the United Nations Security
Council

Anthrax spores can be, and have been, used as a biological warfare weapon. The first modern incidence
occurred when Scandinavian "freedom fighters" (rebel groups) supplied by the German General Staff
used anthrax, with unknown results, against the Imperial Russian Army in Finland in 1916.[39] There is a
long history of practical bioweapons research in this area. For example, in 1942 British bioweapons
trials[40] severely contaminated Gruinard Island in Scotland with anthrax spores of the Vollum-14578
strain, making it a no-go area until it was decontaminated in 1990.[41][42] The Gruinard trials involved
testing the effectiveness of a submunition of an "N-bomb"—a biological weapon. Additionally, five
million "cattle cakes" impregnated with anthrax were prepared and stored at Porton Down in "Operation
Vegetarian"—an anti-livestock weapon intended for attacks on Germany by the Royal Air Force.[43] The
infected cattle cakes were to be dropped on Germany in 1944. However neither the cakes nor the bomb
was used; the cattle cakes were incinerated in late 1945.

More recently, the Rhodesian government used anthrax against cattle and humans in the period 1978–
1979 during its war with black nationalists.[44]

American military and British Army personnel are routinely vaccinated against anthrax prior to active
service in places where biological attacks are considered a threat. The anthrax vaccine, produced by
BioPort Corporation, contains non-living bacteria, and is approximately 93% effective in preventing
infection.[citation needed]

Weaponized stocks of anthrax in the US were destroyed in 1971–72 after President Nixon ordered the
dismantling of US biowarfare programs in 1969 and the destruction of all existing stockpiles of
bioweapons.

The Soviet Union created and stored 100 to 200 tons of anthrax spores at Kantubek on Vozrozhdeniya
Island. The stockpile was abandoned in 1992 and destroyed in 2002.

[edit] Sverdlovsk incident

Main article: Sverdlovsk anthrax leak

Despite signing the 1972 agreement to end bioweapon production, the government of the Soviet Union
had an active bioweapons program that included the production of hundreds of tons of weapons-grade
anthrax after this period. On 2 April 1979, some of the over one million people living in Sverdlovsk (now
called Ekaterinburg, Russia), about 850 miles east of Moscow, were exposed to an accidental release of
anthrax from a biological weapons complex located near there. At least 94 people were infected, of whom
at least 68 died. One victim died four days after the release, ten over an eight-day period at the peak of the
deaths, and the last six weeks later. Extensive cleanup, vaccinations and medical interventions managed
to save about 30 of the victims.[45] Extensive cover-ups and destruction of records by the KGB continued
from 1979 until Russian President Boris Yeltsin admitted this anthrax accident in 1992. Jeanne Guillemin
reported in 1999 that a combined Russian and United States team investigated the accident in
1992.[46][47][48]

Nearly all of the night shift workers of a ceramics plant directly across the street from the biological
facility (compound 19) became infected, and most died. Since most were men, there were suspicions by
NATO governments that the Soviet Union had developed a sex-specific weapon.[49] The government
blamed the outbreak on the consumption of anthrax-tainted meat and ordered the confiscation of all
uninspected meat that entered the city. They also ordered that all stray dogs be shot and that people not
have contact with sick animals. There was also a voluntary evacuation and anthrax vaccination program
established for people aged 18–55.[50]

To support the cover-up story, Soviet medical and legal journals published articles about an outbreak in
livestock that caused GI anthrax in people who had consumed infected meat, and cutaneous anthrax in
people who had come into contact with the animals. All medical and public health records were confiscated
by the KGB.[50] In addition to the medical problems that the outbreak caused, it also prompted Western
countries to be more suspicious of a covert Soviet Bioweapons program and to increase their surveillance
of suspected sites. In 1986, the US government was allowed to investigate the incident, and concluded
that the exposure was from aerosol anthrax from a military weapons facility. [51] In 1992, President Yeltsin
admitted that he was "absolutely certain" that "rumors" about the Soviet Union violating the 1972
Bioweapons Treaty were true. The Soviet Union, like the US and UK, had agreed to submit information
to the UN about their bioweapons programs but omitted known facilities and never acknowledged their
weapons program.[49]

[edit] Anthrax bioterrorism

In theory, anthrax spores can be cultivated with minimal special equipment and a first-year collegiate
microbiological education, but in practice the procedure is difficult and dangerous. To make large
amounts of an aerosol form of anthrax suitable for biological warfare requires extensive practical
knowledge, training, and highly advanced equipment.[citation needed]

Concentrated anthrax spores were used for bioterrorism in the 2001 anthrax attacks in the United States,
delivered by mailing postal letters containing the spores.[52] The letters were sent to several news media
offices as well as to two Democratic senators: Tom Daschle of South Dakota and Patrick Leahy of
Vermont. As a result, 22 people were infected and five died.[12] Only a few grams of material were used in these
attacks and in August 2008 the US Department of Justice announced they believed that Dr. Bruce Ivins, a
senior biodefense researcher employed by the United States government, was responsible.[53] These
events also spawned many anthrax hoaxes.

Due to these events, the U.S. Postal Service installed biohazard detection systems at its major distribution
centers to actively scan for anthrax being transported through the mail.[54]

[edit] Decontaminating mail

In response to the postal anthrax attacks and hoaxes the US Postal Service sterilized some mail using a
process of gamma irradiation and treatment with a proprietary enzyme formula supplied by Sipco
Industries Ltd.[55]

A scientific experiment performed by a high school student, later published in The Journal of Medical
Toxicology, suggested that a domestic electric iron at its hottest setting (at least 400 °F (204 °C)) used for
at least 5 minutes should destroy all anthrax spores in a common postal envelope.

Bubonic plague

From Wikipedia, the free encyclopedia


This article is about the disease in general. For information about the medieval European plague, see
Black Death.

Bubonic plague

Classification and external resources

ICD-10 A20.0[1]

ICD-9 020.0[1]

DiseasesDB 14226

Bubonic plague is the best-known manifestation of plague, a zoonotic disease circulating mainly
among small rodents and their fleas,[2] and is one of three types of infection caused by Yersinia pestis
(formerly known as Pasteurella pestis), which belongs to the family Enterobacteriaceae.

The term bubonic plague is derived from the Greek word βουβών (boubōn), meaning "groin". Swollen lymph
nodes (buboes) especially occur in the armpit and groin in persons suffering from bubonic plague.
Bubonic plague was often used synonymously for plague, but it does in fact refer specifically to an
infection that enters through the skin and travels through the lymphatics, as is often seen in flea-borne
infections.

In humans, the bubonic plague kills about two out of three infected patients in 2–6 days without
treatment. The latest research shows that the bubonic plague was the cause of the Black Death that swept
through Europe in the 14th century and killed an estimated 75 million people, 30-60% of the European
population.[3] Because the plague killed so many of the working population, wages rose and some
historians have seen this as a turning point in European economic development.[4][5]

Contents


• 1 Signs and symptoms
• 2 Pathophysiology
• 3 Treatment
• 4 Laboratory testing
• 5 History
  o 5.1 European plague control officer killed in Pune
• 6 Biological warfare
• 7 See also
• 8 Footnotes
• 9 References
• 10 Further reading
  o 10.1 Books
  o 10.2 Articles

[edit] Signs and symptoms

The most famous symptom of bubonic plague is painful, swollen lymph glands, called buboes. These are commonly found in the armpits, groin or neck. Because infection typically occurs through a flea bite, bubonic plague is often the first step of a progressive series of illnesses; the two other types are pneumonic and septicemic. Pneumonic plague, unlike the bubonic or septicemic forms, induces coughing, is very infectious, and allows person-to-person spread. Bubonic plague symptoms appear suddenly, usually 2–5 days after exposure to the bacteria. Symptoms include:

 • Chills
 • General ill feeling (malaise)
 • High fever (102 °F / 39 °C)
 • Muscle pain
 • Severe headache
 • Seizures
 • Smooth, painful lymph gland swelling called a bubo, commonly found in the groin, but possibly occurring in the armpits or neck, most often at the site of the initial infection (bite or scratch)
 • Pain may occur in the area before the swelling appears

Other symptoms include heavy breathing, continuous vomiting of blood, urination of blood,[citation needed] aching limbs, coughing, and extreme pain. The pain is usually caused by the decay of the skin while the person is still alive. Additional symptoms include extreme tiredness, gastrointestinal problems, lenticulae (black dots scattered throughout the body), delirium and coma.

[edit] Pathophysiology

Bubonic plague is an infection of the lymphatic system, usually resulting from the bite of an infected flea,
Xenopsylla cheopis (the rat flea). The fleas are often found on rodents such as rats and mice, and seek out
other prey when their rodent hosts die. The bacteria form aggregates in the gut of infected fleas and this
results in the flea regurgitating ingested blood, which is now infected, into the bite site of a rodent or
human host. Once established, bacteria rapidly spread to the lymph nodes and multiply. Y. pestis bacilli
can resist phagocytosis and even reproduce inside phagocytes and kill them. As the disease progresses,
the lymph nodes can haemorrhage and become swollen and necrotic. Bubonic plague can progress to
lethal septicemic plague in some cases. The plague is also known to spread to the lungs and become the
disease known as the pneumonic plague. This form of the disease is highly infectious as the bacteria can
be transmitted in droplets emitted when coughing or sneezing, as well as physical contact with victims of
the plague or flea-bearing rodents that carry the plague.

[edit] Treatment

In modern times, several classes of antibiotics are effective in treating bubonic plague. These include
aminoglycosides such as streptomycin and gentamicin, tetracyclines (especially doxycycline), and the
fluoroquinolone ciprofloxacin. Mortality associated with treated cases of bubonic plague is about 1-15%,
compared to a mortality rate of 50-90% in untreated cases.[6]
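In concrete terms (simple arithmetic on the ranges quoted above), per 100 cases this corresponds to

    untreated: roughly 50–90 deaths per 100 cases
    treated:   roughly 1–15 deaths per 100 cases

an absolute reduction on the order of 35–89 deaths per 100 cases, depending on where each range actually falls.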

[edit] Laboratory testing

Laboratory testing is required, in order to diagnose and confirm plague. Ideally, confirmation is through
the identification of Y. pestis culture from a patient sample. Confirmation of infection can be done by
examining serum taken during the early and late stages of infection. To quickly screen for the Y. pestis
antigen in patients, rapid dipstick tests have been developed for field use.[7]

[edit] History

Main articles: Plague of Justinian, Black Death, and Third Pandemic

Bubonic plague is believed to have claimed nearly 200 million lives, although there is some debate as to
whether all of the plagues attributed to it are in fact the same disease. The first recorded epidemic ravaged
the Byzantine Empire during the sixth century, and was named the Plague of Justinian after emperor
Justinian I, who was infected but survived through extensive treatment.[8][9]

It is generally held that the most infamous and devastating outbreak of bubonic plague was the Black
Death, which killed a third of the population of Europe in the 14th century. In affected cities, proper
burial rituals were abandoned and bodies were buried in mass graves, or abandoned in the street. The
Black Death is thought to have originated in the Gobi Desert. Carried by the fleas on rats, it spread along
trade routes and reached the Crimea in 1346. (It also spread eastward to the Yangtze river valley, and the resulting epidemic, ignored by the government, brought down the Yuan dynasty.[citation needed]) In 1347 it
spread to Constantinople and then Alexandria, killing thousands every day, and soon arrived in Western
Europe.

Though bubonic plague is generally regarded as the probable pathogen responsible for the Black Death outbreak, there are significant differences between the symptoms and spread of the Black Death and those of more recent bubonic plague outbreaks, and several alternative theories of the Black Death have been proposed involving pathogens other than Y. pestis.

The next few centuries were marked by several local outbreaks of lesser severity. The Great Plague of
Seville, 1649, the Great Plague of London, 1665–1666, the Great Plague of Vienna, 1679, and the Great
Plague of Marseille, 1720, were the last major outbreaks of the bubonic plague in Europe.

A popular folk etymology holds that the children's game of "Ring Around the Rosy" (or Ring a Ring o' Roses) derives from the appearance of the bubonic plague. Proponents claim that "Ring around the rosy" refers to the rosy-red, rash-like ring that appeared as a symptom of the plague. "Pocket full of posy" referred to carrying flower petals, since at the time it was believed the disease spread through foul-smelling air and that scent stopped the spread. "Ashes, ashes" referred to the burning of infected corpses (in the UK the words of the rhyme are "atishoo, atishoo", mimicking sneezing), and "we all fall down" referred to the virulent deaths attributed to the plague.[10] Many folklorists, however, hold that the association of this rhyme with plague is baseless.[11][12]

Cases were reported in Peru in 1994 and 2010, and in 2010 a case was also reported in Oregon, United States.[13]

[edit] European plague control officer killed in Pune

Directions for searchers, Pune plague of 1897

The plague resurfaced in the mid-19th century; like the Black Death, the Third Pandemic began in Central
Asia. The disease killed millions in China and in India, then a British colony, and then spread worldwide.
The outbreak continued into the early 20th century. In 1897, the city of Pune in India was severely
affected by the outbreak. The government responded to the plague with a committee system that used the
military to perpetrate repression and tyranny as it tackled the pandemic. Nationalists publicly berated the
government. On 22 June 1897, two young brahmins, the Chapekar brothers, shot and killed two British
officers, the Committee chairman and his military escort. This act has been considered a landmark event
in India's struggle for freedom as well as the worst violence against political authority seen in the world
during the third plague pandemic.[14] The award-winning Marathi film 22 June 1897 covers events prior to
the assassination, the act and its aftermath.[15][16]

[edit] Biological warfare

Plague was used during the Second Sino-Japanese War as a bacteriological weapon by the Imperial
Japanese Army. These weapons were provided by Shirō Ishii's units and used in experiments on humans
before being used on the field. For example, in 1940, the Imperial Japanese Army Air Service bombed
Ningbo with fleas carrying the bubonic plague.[17] During the Khabarovsk War Crime Trials, the accused, such as Major General Kiyoshi Kawashima, testified that, in 1941, some 40 members of Unit 731 air-dropped plague-contaminated fleas on Changde. These operations caused epidemic plague outbreaks.[18]

Chickenpox


"Varicella" redirects here. For the interactive fiction computer game, see Varicella (computer game).

For other uses, see Chickenpox (disambiguation).

Chickenpox

Classification and external resources

Child with varicella disease

ICD-10 B01

ICD-9 052

DiseasesDB 29118

MedlinePlus 001592

eMedicine ped/2385 derm/74, emerg/367

MeSH C02.256.466.175

Chickenpox or chicken pox is a highly contagious illness caused by primary infection with varicella zoster virus (VZV).[1] It usually starts with a vesicular skin rash, mainly on the body and head rather than at the periphery, which becomes itchy and develops into raw pockmarks that mostly heal without scarring.

Chicken pox is an airborne disease spread easily through coughing or sneezing of ill individuals or
through direct contact with secretions from the rash. A person with chickenpox is infectious from one to
five days before the rash appears.[2] The contagious period continues for 4 to 5 days after the appearance
of the rash, or until all lesions have crusted over. Immunocompromised patients are probably contagious during the entire period that new lesions keep appearing. Crusted lesions are not contagious.[3]

It takes from 10 to 21 days after contact with an infected person for someone to develop chickenpox.

Chickenpox is often heralded by a prodrome of myalgia, nausea, fever, headache, sore throat, pain in both ears, complaints of pressure in the head or swollen face, and malaise in adolescents and adults. In children the first symptom is usually the development of a papular rash, followed by malaise, fever (a body temperature of 38 °C (100 °F), but possibly as high as 42 °C (108 °F) in rare cases), and anorexia. Typically, the disease is more severe in adults.[4] Chickenpox is rarely fatal, although it is
generally more severe in adult males than in adult females or children. Pregnant women and those with a
suppressed immune system are at highest risk of serious complications. Chicken pox is believed to be the
cause of one third of stroke cases in children.[5] The most common late complication of chicken pox is
shingles, caused by reactivation of the varicella zoster virus decades after the initial episode of
chickenpox.

Chickenpox has been observed in other primates, including chimpanzees[6] and gorillas.[7]

The disease is not related in any way to chickens; the name uses "chicken" in the sense of "weak" or
"cowardly" (i.e., a "wimpier" version of smallpox).[8]

A single blister, typical during the early stages of the rash

The back of a 30-year-old male, taken on day 5 of the rash

Contents

 • 1 Diagnosis
 • 2 Epidemiology
 • 3 Pathophysiology
 o 3.1 Infection in pregnancy and neonates
 o 3.2 Shingles
 • 4 Prevention
 o 4.1 Hygiene measures
 o 4.2 Vaccine
 • 5 Treatment
 o 5.1 Children
 o 5.2 Adults
 • 6 Prognosis
 • 7 History
 • 8 See also
 • 9 References
 • 10 External links

[edit] Diagnosis

The diagnosis of varicella is primarily clinical, with typical early "prodromal" symptoms, and then the
characteristic rash. Confirmation of the diagnosis can be sought through either examination of the fluid
within the vesicles of the rash, or by testing blood for evidence of an acute immunologic response.

Vesicular fluid can be examined with a Tzanck smear or, better, by direct fluorescent antibody testing. The fluid can also be cultured, whereby attempts are made to grow the virus from a fluid sample. Blood tests can be used to identify a response to acute infection (IgM) or previous infection and subsequent immunity (IgG).[9]

Prenatal diagnosis of fetal varicella infection can be performed using ultrasound, though a delay of 5
weeks following primary maternal infection is advised. A PCR (DNA) test of the mother's amniotic fluid
can also be performed, though the risk of spontaneous abortion due to the amniocentesis procedure is
higher than the risk of the baby developing foetal varicella syndrome.[10]

[edit] Epidemiology

Primary varicella is an endemic disease. Cases of varicella are seen throughout the year but more
commonly in winter and early spring. Varicella is one of the classic diseases of childhood, with the
highest prevalence in the 4–10 year old age group. Like rubella, it is uncommon in preschool children.
Varicella is highly communicable, with an infection rate of 90% in close contacts. Most people become
infected before adulthood but 10% of young adults remain susceptible.

Historically, varicella has been a disease predominantly affecting young school-aged children. In adults
the pock marks are darker and the scars more prominent than in children.[11]

[edit] Pathophysiology

Exposure to VZV in a healthy child initiates the production of host immunoglobulin G (IgG),
immunoglobulin M (IgM), and immunoglobulin A (IgA) antibodies; IgG antibodies persist for life and
confer immunity. Cell-mediated immune responses are also important in limiting the scope and the
duration of primary varicella infection. After primary infection, VZV is hypothesized to spread from
mucosal and epidermal lesions to local sensory nerves. VZV then remains latent in the dorsal ganglion cells of the sensory nerves. Reactivation of VZV results in the clinically distinct syndrome of herpes zoster (i.e., shingles), and sometimes Ramsay Hunt syndrome type II.[citation needed]

[edit] Infection in pregnancy and neonates

For pregnant women, antibodies produced as a result of immunization or previous infection are
transferred via the placenta to the fetus.[12] Women who are immune to chickenpox cannot become
infected and do not need to be concerned about it for themselves or their infant during pregnancy.[13]

Varicella infection in pregnant women could lead to viral transmission via the placenta and infection of
the fetus. If infection occurs during the first 28 weeks of gestation, this can lead to fetal varicella
syndrome (also known as congenital varicella syndrome).[14] Effects on the fetus can range in severity
from underdeveloped toes and fingers to severe anal and bladder malformation. Possible problems
include:

 • Damage to the brain: encephalitis,[15] microcephaly, hydrocephaly, aplasia of brain
 • Damage to the eye: optic stalk, optic cup, and lens vesicles; microphthalmia, cataracts, chorioretinitis, optic atrophy
 • Other neurological disorders: damage to cervical and lumbosacral spinal cord, motor/sensory deficits, absent deep tendon reflexes, anisocoria/Horner's syndrome
 • Damage to body: hypoplasia of upper/lower extremities, anal and bladder sphincter dysfunction
 • Skin disorders: (cicatricial) skin lesions, hypopigmentation

Infection late in gestation or immediately following birth is referred to as "neonatal varicella".[16]


Maternal infection is associated with premature delivery. The risk of the baby developing the disease is
greatest following exposure to infection in the period 7 days prior to delivery and up to 7 days following
the birth. The baby may also be exposed to the virus via infectious siblings or other contacts, but this is of
less concern if the mother is immune. Newborns who develop symptoms are at a high risk of pneumonia
and other serious complications of the disease.[10]

[edit] Shingles

Main article: Shingles

After a chickenpox infection, the virus remains dormant in the body's nerve tissues. The immune system
keeps the virus at bay, but later in life, usually as an adult, it can be reactivated and cause a different form
of the viral infection called shingles.[17]

[edit] Prevention

[edit] Hygiene measures

The spread of chicken pox can be prevented by isolating affected individuals. Contagion is by exposure to
respiratory droplets, or direct contact with lesions, within a period lasting from three days prior to the
onset of the rash, to four days after the onset of the rash.[18] Therefore, avoidance of close proximity or
physical contact with affected individuals during that period will prevent contagion. The chicken pox
virus (VZV) is susceptible to disinfectants, notably chlorine bleach (i.e., sodium hypochlorite). Also, like all enveloped viruses, VZV is sensitive to desiccation, heat and detergents, and is therefore relatively easy to kill.

[edit] Vaccine

Main article: Varicella vaccine

A varicella vaccine was first developed by Michiaki Takahashi in 1974, derived from the Oka strain. It has
been available in the U.S. since 1995 to inoculate against the disease. Some countries require the varicella
vaccination or an exemption before entering elementary school. Protection is not lifelong and further
vaccination is necessary five years after the initial immunization.[19] The chickenpox vaccine is not part of
the routine childhood vaccination schedule in the UK. In the UK, the vaccine is currently only offered to
people who are particularly vulnerable to chickenpox.[20]

[edit] Treatment

This section requires expansion.

There is no cure for varicella; treatment mainly consists of easing the symptoms while the immune system clears the virus from the body. As a protective measure, patients are usually required to stay at home while they are infectious, to avoid spreading the disease to others. Sufferers are also frequently asked to cut their nails short or to wear gloves to prevent scratching and to minimize the risk of secondary infections.

The condition resolves by itself within a couple of weeks, but meanwhile patients must pay attention to their personal hygiene.[21] The rash caused by varicella zoster virus may, however, last for up to one month, although the infectious stage does not last longer than a week or two.[22] Staying in cool surroundings can help ease the itching, as heat and sweat make it worse.

Calamine lotion, a topical barrier preparation containing zinc oxide, is one of the most commonly used interventions; although there have been no formal clinical studies evaluating its effectiveness, it has an excellent safety profile.[23] It is important to maintain good hygiene and daily cleaning of the skin with warm water to avoid secondary bacterial infection.[24] Scratching may also increase the risk of secondary infection.[25] Addition of a small quantity of vinegar to the water is sometimes advocated.

To relieve the symptoms of chicken pox, people commonly use anti-itching creams and lotions. These
lotions are not to be used on the face or close to the eyes. An oatmeal bath also might help ease
discomfort.[26]

Natural chicken pox remedies include pea water, baking soda, vitamin E oil, honey, herbal tea or carrot
and coriander. It is believed that the irritation of the skin can be relieved to some extent with water in
which fresh peas have been cooked.[27] A lotion made of baking soda with water can be sponged onto the
skin of the patients to ease the itching. Also, rubbing vitamin E oil or honey on the skin is thought to have
a healing effect on the marks that could remain after the infection has been cured. Some people claim that
the mild sedative effect of green tea is effective in relieving the symptoms. It is not known, however, to what extent these home remedies actually help patients cope with their symptoms.

A varicella vaccine is available for people who have been exposed to the virus but have not experienced symptoms. The vaccine is more effective if administered within three days, and up to five days, after exposure; it has been shown to prevent or reduce symptoms in 90% of cases if given within three days of exposure. For people who have been exposed to the virus but for whom the vaccine is contraindicated, a medication called varicella zoster immunoglobulin (VZIG) is available, which may prevent or reduce the symptoms after exposure. Because of its high cost and the temporary protection it provides, VZIG is administered primarily to individuals at risk of developing complications: newborns whose mothers had chicken pox a few days before or after delivery, children with leukemia or lymphoma, people with a poor immune system, and pregnant women. VZIG should be administered no later than 96 hours after exposure to the virus.

[edit] Children

If oral acyclovir is started within 24 hours of rash onset it decreases symptoms by one day but has no
effect on complication rates. Use of acyclovir therefore is not currently recommended for
immunocompetent individuals (i.e., otherwise healthy persons without known immunodeficiency or on
immunosuppressive medication).[28]

Treatment of chicken pox in children is aimed at symptoms whilst the immune system deals with the
virus.[29] With children younger than 12 years, cutting nails and keeping them clean is an important part of treatment, as they are more likely to scratch their blisters deeply. Children older than one month and younger than 12 years are generally not given antiviral medication unless they have another medical condition that would put them at risk of developing complications.

Increased amounts of water are recommended to avoid dehydration, especially if the child develops fever. Fever, headaches or pain can be relieved with painkillers such as paracetamol. Children who are older than one year may be given antihistamine tablets or liquid medicines, which are helpful when the child cannot sleep because of the itching. There is some uncertainty about the use and safety of ibuprofen.

Acyclovir or immunoglobulin is generally prescribed for children who are at risk of complications from chicken pox; they receive the same treatment as described above, plus antiviral medication. Children considered at risk of developing complications include infants less than one month old, those with a suppressed immune system, those taking steroids or immune-suppressing medication, and children with severe heart, lung or skin conditions. Adults and teenagers are also considered at risk of complications and are normally given antiviral medication.

Aspirin is highly contraindicated in children younger than 16 years, as it has been associated with a potentially fatal condition known as Reye's syndrome.

[edit] Adults

Infection in otherwise healthy adults tends to be more severe and active; treatment with antiviral drugs
(e.g. acyclovir) is generally advised, as long as it is started within 24–48 hours from rash onset.[30]

Remedies to ease the symptoms of chicken pox in adults are basically the same as those used for children. Moreover, adults are often prescribed antiviral medication, as it is effective in reducing the severity of the condition and the likelihood of developing complications. Antiviral medicines do not kill the virus, but stop it from multiplying.

Adults are also advised to increase water intake to reduce dehydration and to relieve headaches.
Painkillers such as paracetamol and ibuprofen are also recommended as they are effective in relieving
itching and other symptoms such as fever or pains. Antihistamines may be used when the symptoms prevent sleep, as they ease the itching and act as a sedative.

As with children, antiviral medication is considered more useful for those adults who are more prone to
develop complications. These include pregnant women or people who have a poor immune system.[31]

Sorivudine, a nucleoside analogue, has been found effective in a few case reports for the treatment of primary varicella in healthy adults, but larger-scale clinical trials are needed to demonstrate the efficacy of this medication.[32]

[edit] Prognosis

The duration of the visible blistering caused by varicella zoster virus in children usually ranges from 4 to 7 days, and the appearance of new blisters begins to subside after the 5th day. Chickenpox infection is
milder in young children, and symptomatic treatment, with sodium bicarbonate baths or antihistamine
medication may ease itching.[33] Paracetamol (acetaminophen) is widely used to reduce fever. Aspirin, or
products containing aspirin, should not be given to children with chickenpox as it can cause Reye's
Syndrome.[34]

In adults, the disease is more severe,[35] though the incidence is much less common. Infection in adults is associated with greater morbidity and mortality due to pneumonia,[36] hepatitis, and encephalitis.[citation needed] In particular, up to 10% of pregnant women with chickenpox develop pneumonia, the severity of which increases with onset later in gestation. In England and Wales, 75% of deaths due to chickenpox are in adults.[10] Inflammation of the brain, or encephalitis, can occur in immunocompromised individuals, although the risk is higher with herpes zoster.[37] Necrotizing fasciitis is also a rare complication.[38]

Secondary bacterial infection of skin lesions, manifesting as impetigo, cellulitis, and erysipelas, is the
most common complication in healthy children. Disseminated primary varicella infection usually seen in
the immunocompromised may have high morbidity. Ninety percent of cases of varicella pneumonia occur
in the adult population. Rarer complications of disseminated chickenpox also include myocarditis,
hepatitis, and glomerulonephritis.[39]

Hemorrhagic complications are more common in immunocompromised or immunosuppressed populations, although healthy children and adults have been affected. Five major clinical syndromes have
been described: febrile purpura, malignant chickenpox with purpura, postinfectious purpura, purpura
fulminans, and anaphylactoid purpura. These syndromes have variable courses, with febrile purpura being
the most benign of the syndromes and having an uncomplicated outcome. In contrast, malignant
chickenpox with purpura is a grave clinical condition that has a mortality rate of greater than 70%. The
etiology of these hemorrhagic chickenpox syndromes is not known.[39]

[edit] History

Early rash of smallpox vs chickenpox: rash mostly on the torso is characteristic of chickenpox

Chickenpox was first identified by Persian scientist Muhammad ibn Zakariya ar-Razi (865–925), known
to the West as "Rhazes", who clearly distinguished it from smallpox and measles.[40] Giovanni Filippo
(1510–1580) of Palermo later provided a more detailed description of varicella (chickenpox).

Foot-and-mouth disease


Not to be confused with hand, foot and mouth disease.

Foot-and-mouth disease

Classification and external resources

Ruptured oral blister in diseased cow.

ICD-10 B08.8 (ILDS B08.820)

ICD-9 078.4

DiseasesDB 31707

MeSH D005536

Foot-and-mouth disease or hoof-and-mouth disease (Aphtae epizooticae) is an infectious and sometimes fatal viral disease that affects cloven-hoofed animals, including domestic and wild bovids. The virus causes a high fever for two or three days, followed by blisters inside the mouth and on the feet that may rupture and cause lameness.

Foot-and-mouth disease is a severe plague for animal farming, since it is highly infectious and can be
spread by infected animals through aerosols, through contact with contaminated farming equipment,
vehicles, clothing or feed, and by domestic and wild predators.[1] Its containment demands considerable
efforts in vaccination, strict monitoring, trade restrictions and quarantines, and occasionally the
elimination of millions of animals.

Susceptible animals include cattle, water buffalo, sheep, goats, pigs, antelope, deer, and bison. It has also been known to infect hedgehogs and elephants,[2][1] and llamas and alpacas may develop mild symptoms but are resistant to the disease and do not pass it on to others of the same species.[1] In laboratory experiments, mice, rats and chickens have been successfully infected by artificial means, but it is not believed that they would contract the disease under natural conditions.[1] Humans are very rarely affected.

The virus responsible for the disease is a picornavirus, the prototypic member of the genus Aphthovirus.
Infection occurs when the virus particle is taken into a cell of the host. The cell is then forced to
manufacture thousands of copies of the virus, and eventually bursts, releasing the new particles in the
blood. The virus is highly variable,[3] which limits the effectiveness of vaccination.

Contents

 • 1 History
 • 2 Clinical signs
 • 3 Transmission
 • 4 Foot-and-mouth disease infecting humans
 • 5 Vaccination
 • 6 Epizootics
 o 6.1 United States 1914-1929
 o 6.2 United Kingdom 1967
 o 6.3 Taiwan 1997
 o 6.4 United Kingdom 2001
 o 6.5 China 2005
 o 6.6 United Kingdom 2007
 o 6.7 Japan and South Korea 2010–2011
 o 6.8 Bulgaria 2011
 • 7 Economic and ethical issues
 • 8 See also
 • 9 References
 • 10 External links

[edit] History

The cause of FMD was first shown to be viral in 1897 by Friedrich Loeffler. He passed the blood of an
infected animal through a Chamberland filter and found that the fluid that was collected could still cause
the disease in healthy animals.

FMD occurs throughout much of the world, and whilst some countries have been free of FMD for some
time, its wide host range and rapid spread represent cause for international concern. After World War II,
the disease was widely distributed throughout the world. In 1996, endemic areas included Asia, Africa,
and parts of South America; as of August 2007, Chile is disease free,[4] and Uruguay and Argentina have
not had an outbreak since 2001. North America and Australia have been free of FMD for many years.
New Zealand has never had a case of foot and mouth disease.[5] Most European countries have been
recognized as disease free, and countries belonging to the European Union have stopped FMD
vaccination.

However, in 2001, a serious outbreak of FMD in Britain resulted in the slaughter of many animals, the postponing of the general election for a month, and the cancellation of many sporting events and leisure activities such as the Isle of Man TT. Due to strict government policies on the sale of livestock, disinfection of all persons leaving and entering farms, and the cancellation of large events likely to be attended by farmers, a potentially economically disastrous epizootic was avoided in the Republic of Ireland,[citation needed] with just one case recorded in Proleek, Co. Louth. In August 2007, FMD was found at two farms in Surrey, England. All livestock were culled and a quarantine was erected over the area. There have since been two other suspected outbreaks, although these now seem not to be related to FMD. The only reported cases in 2010 were a false alarm in Florida, disproven by the state's agricultural authorities, and a confirmed quarantine and slaughter of cows and pigs in Miyazaki prefecture, Japan, in June, after three cows tested positive. A total of some 270,000 cattle were ordered slaughtered following the disease's outbreak.

[edit] Clinical signs

Ruptured blisters on the feet of a pig

The incubation period for foot-and-mouth disease virus ranges between 2 and 12 days.[6] The disease
is characterized by high fever that declines rapidly after two or three days; blisters inside the mouth that
lead to excessive secretion of stringy or foamy saliva and to drooling; and blisters on the feet that may
rupture and cause lameness. Adult animals may suffer weight loss from which they do not recover for several months, as well as swelling in the testicles of mature males; in cows, milk production can decline significantly. Though most animals eventually recover from FMD, the disease can lead to
myocarditis (inflammation of the heart muscle) and death, especially in newborn animals. Some infected
animals remain asymptomatic, but they nonetheless carry FMD and can transmit it to others.

[edit] Transmission

The foot-and-mouth disease virus can be transmitted in a number of ways, including close-contact animal-to-animal spread, long-distance aerosol spread, and fomites (inanimate objects), typically fodder and motor vehicles. The clothes and skin of animal handlers such as farmers, standing water, and uncooked
food scraps and feed supplements containing infected animal products can harbor the virus as well. Cows
can also catch FMD from the semen of infected bulls. Control measures include quarantine and
destruction of infected livestock, and export bans for meat and other animal products to countries not
infected with the disease.

Just as humans may spread the disease by carrying the germs on their clothes and body, animals that are
not susceptible to the disease may still aid in spreading it. This was the case in Canada in 1952, when an
outbreak flared up again after dogs had carried off bones from dead animals.[1] Wolves are thought to play
a similar role in the former Soviet Union.[7]

[edit] Foot-and-mouth disease infecting humans

Humans can be infected with foot-and-mouth disease through contact with infected animals, but this is
extremely rare. Some cases were caused by laboratory accidents. Because the virus that causes FMD is
sensitive to stomach acid, it cannot spread to humans via consumption of infected meat, except in the mouth before the meat is swallowed. In the UK, the last confirmed human case occurred in 1966,[8][9] and
only a few other cases have been recorded in countries of continental Europe, Africa, and South America.
Symptoms of FMD in humans include malaise, fever, vomiting, red ulcerative lesions (surface-eroding
damaged spots) of the oral tissues, and sometimes vesicular lesions (small blisters) of the skin. According
to a newspaper report, FMD killed two children in England in 1884, supposedly due to infected milk.[10]

There is another viral disease with similar symptoms, commonly referred to as "hand, foot and mouth
disease", that occurs more frequently in humans, especially in young children; the cause, Coxsackie A
virus, is different from FMDV. Coxsackie viruses belong to the Enteroviruses within the Picornaviridae.

Because FMD rarely infects humans, but spreads rapidly among animals, it is a much greater threat to the
agriculture industry than to human health. Farmers around the world can lose huge amounts of money
during a foot-and-mouth epizootic, when large numbers of animals are destroyed and revenues from milk
and meat production go down.

[edit] Vaccination

Plum Island Animal Disease Center

Seven main types of FMDV are believed to exist.[11] Like other viruses, the FMD virus continually
evolves and mutates, thus one of the difficulties in vaccinating against FMD is the huge variation between
and even within serotypes. There is no cross-protection between serotypes (meaning that a vaccine for
one serotype will not protect against any others) and in addition, two strains within a given serotype may
have nucleotide sequences that differ by as much as 30% for a given gene. This means that FMD vaccines
must be highly specific to the strain involved. Vaccination only provides temporary immunity that lasts
from months to years.
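To make the divergence figure concrete, the following minimal sketch (in Python; the sequences and the function name are invented for illustration and are not real FMDV data) shows how percent nucleotide difference between two aligned strain sequences might be computed:

    # Toy illustration: percent nucleotide divergence between two aligned
    # sequences, as used when comparing FMDV strains within a serotype.
    # The sequences below are invented examples, not real FMDV genes.

    def percent_divergence(seq_a: str, seq_b: str) -> float:
        """Return the percentage of aligned positions that differ."""
        if len(seq_a) != len(seq_b):
            raise ValueError("sequences must be aligned to equal length")
        mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
        return 100.0 * mismatches / len(seq_a)

    strain_1 = "ATGGCTACGTTAGCCGATAA"  # hypothetical gene fragment
    strain_2 = "ATGACTACGTAAGCCGTTAA"  # hypothetical variant of the same gene

    print(f"{percent_divergence(strain_1, strain_2):.1f}% divergent")  # prints: 15.0% divergent

Two strains differing at up to 30% of positions in a gene can present quite different antigens, which is why a vaccine matched to one strain may not protect against another, even within the same serotype.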

Currently, the World Organisation for Animal Health recognizes countries to be in one of three disease
states with regards to FMD: FMD present with or without vaccination, FMD-free with vaccination, and
FMD-free without vaccination. Countries that are designated FMD-free without vaccination have the
greatest access to export markets, and therefore many developed nations, including Canada, the United
States, and the UK, work hard to maintain their current FMD-free without vaccination status.

There are several reasons cited for restricting export from countries using FMD vaccines. The most
important is probably that routine blood tests, relying on antibodies, cannot distinguish between an
infected and a vaccinated animal.[12] This would severely hamper screening of animals used in export
products, risking a spread of FMD to importing countries. A widespread preventive vaccination would
also conceal the existence of the virus in a country. From there, it could potentially spread to countries
without vaccine programs. Lastly, an animal that is infected shortly after being vaccinated can harbour
and spread FMD without showing symptoms itself, hindering containment and culling of sick animals as
a remedy.

Many early vaccines used dead samples of FMDV to inoculate animals, but those early vaccines sometimes caused real outbreaks. In the 1970s, scientists discovered that a vaccine could be made using only a single key protein from the virus; the challenge was to produce the protein in quantities sufficient for vaccination. On June 18, 1981, the U.S. government announced the creation of a vaccine targeted against FMD; this was the world's first genetically engineered vaccine.

The North American FMD Vaccine Bank is housed at the United States Department of Agriculture's
(USDA) Foreign Animal Disease Diagnostic Laboratory (FADDL) at Plum Island Animal Disease
Center. The Center, located 1.5 miles (2.4 km) off the coast of Long Island, NY, is the only place in the
United States where scientists can conduct research and diagnostic work on highly contagious animal
diseases such as FMD. Because of this limitation, US companies working on FMD usually use facilities in other countries where such diseases are endemic.

[edit] Epizootics

[edit] United States 1914-1929

The US has had nine FMD outbreaks since 1870. The most devastating happened in 1914. It originated in Michigan, but it was its entry into the stockyards in Chicago that turned it into an epizootic. About 3,500 livestock herds were infected across the US, totaling over 170,000 cattle, sheep and swine. Eradication came at a cost of US$4.5 million (1914 dollars). A 1924 outbreak in California
resulted not only in the slaughter of 109,000 farm animals, but also 22,000 deer. The US saw its latest
FMD outbreak in Montebello, California in 1929. This outbreak originated in hogs that had eaten infected
meat scraps from a tourist steamship that had stocked meat in Argentina. Over 3,600 animals were
slaughtered and the disease was contained in less than a month.[13][14]

[edit] United Kingdom 1967

Main article: 1967 United Kingdom foot-and-mouth epidemic

In October 1967, a farmer in Shropshire reported a lame sow, which was later diagnosed with FMD. The
source was believed to be remains of legally-imported infected lamb from Argentina and Chile. The virus
spread and in total, 442,000 animals were slaughtered and the outbreak had an estimated cost of £370
million.

[edit] Taiwan 1997

Taiwan had previous epidemics of FMD in 1913-14 and 1924–29, but had since been spared epidemics[15] and considered itself free of FMD as late as the 1990s. On 19 March 1997, a sow at a farm in Hsinchu prefecture, Taiwan, was diagnosed with a strain of FMD which only infects swine.
Mortality was high, nearing 100% in the infected herd. The cause of the epidemic was not determined, but the farm was near a port city known for its pig-smuggling industry and illegal slaughterhouses. Smuggled
swine or contaminated meat are thus likely sources of the disease.

The disease spread fast among swine herds in Taiwan, with 200-300 new farms being infected daily.
Causes for this include the high swine density in the area, with up to 6,500 hogs per square mile, the feeding of pigs with untreated garbage, and the farms' proximity to slaughterhouses. Other systemic issues, such as lack of laboratory facilities, slow response, and the initial lack of a vaccination program, contributed. It is also alleged that farmers intentionally introduced FMD to their herds, because the payment offered for culled swine was at the time higher than their market value.

A complicating factor is the endemic spread of swine vesicular disease (SVD) in Taiwan. The symptoms
are indistinguishable from FMD, which may have led to previous misdiagnosing of FMD as SVD.
Laboratory analysis was seldom used for diagnosis and FMD may thus have gone unnoticed for some
time in Taiwan.

The swine depopulation was a massive undertaking, with the military contributing substantial manpower.
At peak capacity, 200,000 hogs per day were disposed of, mainly by electrocution. Carcasses were
disposed of by burning and burial, but burning was avoided in water resource protection areas. In April,
industrial incinerators were running around the clock to dispose of the carcasses.

Initially, 40,000 combined vaccines for the strains O-1, A-24 and Asia-1 were available and administered
to zoo animals and valuable breeding hogs. At the end of March, half a million new doses of vaccines for
O-1 and Asia-1 were made available. On 3 May, 13 million doses of O-1 vaccine arrived, and
both the March and May shipments were distributed free of charge. There was a danger of vaccination
crews spreading the disease; therefore, trained farmers were allowed to administer the vaccine under
veterinary supervision.

Taiwan had previously been the major exporter of pork to Japan, and among the top 15 pork producers in
the world in 1996. During the outbreak, over 3.8 million swine were destroyed at a cost of $6.9 billion
USD. The Taiwanese pig industry was devastated as a result, and the export market was in ruins.[13][16] In 2007, Taiwan was considered free of FMD but was still conducting a vaccination program, which restricts the export of meat from Taiwan.
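As simple arithmetic on the figures above, the scale of the cull can be bounded: even at the peak disposal rate of 200,000 hogs per day,

    3,800,000 swine ÷ 200,000 swine/day ≈ 19 days

would have been needed, and since the peak rate was not sustained throughout, the campaign necessarily ran longer than that.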

[edit] United Kingdom 2001

Main article: 2001 United Kingdom foot-and-mouth crisis

The epidemic of foot-and-mouth disease in the United Kingdom in the spring and summer of 2001 was
caused by the "Type O pan Asia" strain of the disease.[17] This episode resulted in more than 2,000 cases
of the disease in farms throughout the British countryside. Around seven million sheep and cattle were
killed in an eventually successful attempt to halt the disease. The county of Cumbria was the worst
affected area of the country, with 843 cases. By the time the disease was halted by October 2001, the
crisis was estimated to have cost Britain £8bn ($16bn) in costs to the agricultural and support industries,
and to the outdoor industry. What made this outbreak so serious was the amount of time between
infection being present at the first outbreak loci, and the time when countermeasures were put into
operation against the disease, such as transport bans and detergent washing of both vehicles and personnel
entering livestock areas. However, the culling of many disease-free animals (80% of culled livestock were uninfected) was a result of inappropriate mathematical modelling that did not reflect the epidemiology of the epidemic.[18] The epidemic was probably caused by pigs which had been fed infected garbage that had not been properly heat-sterilized. It is further believed that the garbage contained remains of infected meat which had been illegally imported to Britain.[19]

[edit] China 2005

In April 2005, an Asia-1 strain of FMD appeared in the eastern provinces of Shandong and Jiangsu.
During April and May, it spread to suburban Beijing, the northern province of Hebei, and the Xinjiang
autonomous region in northwest China. On 13 May, China reported the FMD outbreak to the World
Health Organization and the OIE. This was the first time China had publicly admitted to having FMD.[20][21] China is still reporting FMD outbreaks. In 2007, reports filed with the OIE documented new
or ongoing outbreaks in the provinces of Gansu, Qinghai and Xinjiang. This included reports of domestic
yak showing signs of infection.[22] FMD is endemic in pastoral regions of China from Heilongjiang
Province in the northeast to Sichuan Province and the Tibetan Autonomous region in the southwest.
Chinese domestic media reports often use a euphemism "Disease Number Five" 五号病 rather than FMD
in reports because of the sensitivity of the FMD issue. In March 2010, Southern Rural News (Nanfang
Nongcunbao) in an article "Breaking the Hoof and Mouth Disease Taboo" noted that FMD has long been
covered up in China by referring to it as disease number five.[23] FMD is also called canker 口疮 or hoof
jaundice 蹄癀 in China so information on FMD in China can be found online using those words as search
terms as well.[24][25] One can find online many provincial orders and regulations on FMD control predating
China's acknowledgment that the disease existed in China, for example Guangxi Zhuang Autonomous
Region 1991 regulation on preventing the spread of Disease No.5.[26]

[edit] United Kingdom 2007

Main article: 2007 United Kingdom foot-and-mouth outbreak

An infection of foot-and-mouth disease in the United Kingdom was confirmed by the Department for
Environment, Food and Rural Affairs, on 3 August 2007, on farmland located in Normandy, Surrey.[27][28]
All livestock in the vicinity were culled on 4 August. A nationwide ban on the movement of cattle and
pigs was imposed, with a 3 km (1.9 mile) protection zone placed around the outbreak sites and the nearby
virus research and vaccine production establishments, together with a 10 km (6.2 mile) increased
surveillance zone.[29]

On 4 August, the strain of the virus was identified as an "01 BFS67-like" virus, one linked to vaccines
and not normally found in animals, and isolated in the 1967 outbreak.[30] The same strain was used at the nearby Institute for Animal Health and at Merial Animal Health Ltd at Pirbright, 2½ miles (4 km) away, an American/French-owned research facility, which was identified as a possible source of infection.[31]

On 12 September, a new outbreak of the disease was confirmed in Egham, Surrey, 19 km (12 miles) from
the original outbreak,[32] with a second case being confirmed on a nearby farm on 14 September.[33]

These outbreaks caused a cull of all at-risk animals in the area surrounding Egham, including two farms near the famous 4-star Hotel Great Fosters.

These outbreaks also caused the closure of Windsor Great Park due to the park containing deer; the park
remained closed for three months.

On 19 September 2007, there was a suspected case of FMD in Solihull, where a temporary control zone
was set up by Defra.

[edit] Japan and South Korea 2010–2011

Main articles: 2010 Japan foot-and-mouth outbreak and 2010–2011 South Korea foot-and-mouth
outbreak

In April 2010, a report of three incursions of foot-and-mouth disease in Japan and South Korea led the United Nations Food and Agriculture Organization (FAO) to issue a call for increased global surveillance. Japanese veterinary authorities confirmed an outbreak of type O FMD virus, currently more common in Asian countries where FMD is endemic.

South Korea was hit by the rarer type A FMD in January and then suffered type O infection in April.[34] The most serious foot-and-mouth outbreak in South Korea's history started in November 2010 on pig farms in Andong, Gyeongsangbuk-do, and subsequently spread rapidly across the country.[35][36] More than 100 cases of the disease have been confirmed in the country so far,[35] and in January 2011, South Korean officials started a mass cull of approximately 12 percent of the entire domestic pig population and 107,000 of the country's three million cattle to halt the outbreak.[35]
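For scale (simple arithmetic on the numbers quoted above), the cattle portion of the cull amounted to

    107,000 ÷ 3,000,000 ≈ 3.6%

of the national herd, proportionally far smaller than the roughly 12 percent of the domestic pig population that was culled.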

[edit] Bulgaria 2011

Main article: 2011 Bulgaria foot-and-mouth disease outbreak

The outbreak was recognised when a wild boar that had crossed the Bulgarian-Turkish border near the village of Kosti, Burgas Province, in the Strandzha Mountains was shot.[37] The autopsy revealed foot-and-mouth disease.[37] After this, 37 infected animals were discovered in the village of Kosti, and all susceptible animals there were culled. Burgas Province and seven other neighbouring provinces declared a quarantine.[38]

On 14 January a further outbreak was discovered in the neighbouring village of Rezovo.[37] The disease is thought to have been carried by a Turkish cattle herd. On 17 January its presence was confirmed.[37] The Bulgarian authorities ordered the culling of all susceptible livestock in Rezovo.[39] Compensation for the losses in the two villages has been promised.[37]

[edit] Economic and ethical issues

Epidemics of FMD have resulted in the slaughter of millions of animals, despite it being a frequently nonfatal disease for adult animals (2-5% mortality), though young animals can have a high mortality. The Taiwan outbreak, which affected only pigs, also showed a high mortality in adults. The destruction of
animals is primarily to halt further spread, as growth and milk production may be permanently affected,
even in animals that have recovered. Due to international efforts to eradicate the disease, infection would
also lead to trade bans being imposed on affected countries. Critics of current policies to cull infected
herds argue that the financial imperative needs to be balanced against the killing of many animals,[40]
especially when a significant proportion of infected animals, most notably those producing milk, would
recover from infection and live normal lives, albeit with reduced milk production. On the ethical side, one
must also consider that FMD is a painful disease for the affected animals.[41] The vesicles/blisters are painful in themselves, and restrict both eating and movement. Through ruptured blisters, the animal is at risk from secondary bacterial infections[41] and, in some cases, permanent disability.

Hand, foot and mouth disease


Not to be confused with Foot-and-mouth disease.

Hand, foot and mouth disease

Classification and external resources

Typical lesions around the mouth of an 11-month-old male

ICD-10 B08.4

ICD-9 074.3

DiseasesDB 5622

MedlinePlus 000965

eMedicine derm/175

MeSH D006232

Hand, foot and mouth disease (HFMD) is a human syndrome caused by intestinal viruses of the
Picornaviridae family. The most common strains causing HFMD are Coxsackie A virus and Enterovirus
71 (EV-71).[1]

HFMD usually affects infants and children, and is quite common. It is moderately contagious and is
spread through direct contact with the mucus, saliva, or feces of an infected person. It typically occurs in
small epidemics in nursery schools or kindergartens, usually during the summer and autumn months. The
usual incubation period is 3–7 days.

It is uncommon in adults, but those with immune deficiencies are very susceptible. HFMD is not to be
confused with foot-and-mouth disease (also called hoof-and-mouth disease), which is a disease affecting
sheep, cattle, and swine, and which is unrelated to HFMD (but also caused by a member of the
Picornaviridae family).

Contents

 • 1 Signs and symptoms
 • 2 Treatment
 • 3 Complications
 • 4 Outbreaks
 o 4.1 1997
 o 4.2 1998
 o 4.3 2006
 o 4.4 2007
 o 4.5 2008
 o 4.6 2009
 o 4.7 2010
 • 5 References
 • 6 External links

[edit] Signs and symptoms

Rash on the hands.

Rash on the feet

Symptoms of HFMD include:[2]

 • Fever
 • Headache
 • Vomiting
 • Fatigue
 • Malaise
 • Referred ear pain
 • Sore throat
 • Painful oral lesions
 • Non-itchy body rash, followed by sores with blisters on palms of hands and soles of feet
 • Oral ulcers
 • Sores or blisters may be present on the buttocks of small children and infants
 • Irritability in infants and toddlers
 • Loss of appetite
 • Diarrhea

The common incubation period (the time between infection and onset of symptoms) is from three to seven
days.

Early symptoms are likely to be fever often followed by a sore throat. Loss of appetite and general
malaise may also occur. Between one and two days after the onset of fever, painful sores (lesions) may
appear in the mouth and/or throat. A rash may become evident on the hands, feet, mouth, tongue, inside of the cheeks, and occasionally the buttocks (though generally a rash on the buttocks is caused by the diarrhea).

[edit] Treatment

There is no specific treatment for hand, foot and mouth disease. Individual symptoms, such as fever and
pain from the sores, may be eased with the use of medication. HFMD is a viral disease that has to run its
course; many doctors do not issue medicine for this illness, unless the infection is severe. Infection in
older children, adolescents, and adults is normally very mild and lasts around one week or sometimes longer. Fever reducers will help to control high temperatures, and lukewarm baths will also help bring the temperature down.

Only a very small minority of sufferers require hospital admission, mainly as a result of neurological
complications (encephalitis, meningitis, or acute flaccid paralysis) or pulmonary edema/pulmonary
hemorrhage.

[edit] Complications

 • Complications from the virus infections that cause HFMD are not common, but if they do occur, medical care should be sought.
 • Viral or aseptic meningitis can rarely occur with HFMD. Viral meningitis causes fever, headache, stiff neck, or back pain. The condition is usually mild and clears without treatment; however, some patients may need to be hospitalized for a short time.
 • Other more serious diseases, such as encephalitis (swelling of the brain) or a polio-like paralysis, result even more rarely. Encephalitis can be fatal.
 • There have been reports of fingernail and toenail loss, mostly in children, within 4 weeks of their having hand, foot, and mouth disease. At this time, it is not known whether the reported nail loss is a result of the infection; in the reports reviewed, however, the nail loss was temporary and nail growth resumed without medical treatment.[3]

[edit] Outbreaks

[edit] 1997


 • In 1997, 31 children died in an outbreak in the Malaysian state of Sarawak.[4]

[edit] 1998

 • In 1998, there was an outbreak in Taiwan, affecting mainly children.[5] There were 405 severe complications, and 78 children died.[6] The total number of cases in that epidemic is estimated to have been 1.5 million.[6]

[edit] 2006

 • In 2006, 7 people died in an outbreak in Kuching, Sarawak (according to the New Straits Times, March 14).[4]
 • In 2006, after an outbreak of chikungunya in southern and some western parts of India, cases of HFMD were reported.[7]

[edit] 2007

 • The largest outbreak of HFMD in India occurred in 2007 in the eastern part of the country, in West Bengal. Authors found 38 cases of HFMD in and around Kolkata.[8]

[edit] 2008

 • An outbreak in China, beginning in March in Fuyang, Anhui, led to 25,000 infections and 42 deaths by May 13.[9][10][11][12][13][14][15] Similar outbreaks were reported in Singapore (more than 2,600 cases as of April 20, 2008),[1] Vietnam (2,300 cases, 11 deaths),[16] Mongolia (1,600 cases),[17] and Brunei (1,053 cases from June–August 2008).[18]

[edit] 2009

 • 17 children died in an outbreak during March and April 2009 in China's eastern Shandong Province, and 18 children died in the neighboring Henan Province.[19] Out of 115,000 reported cases in China from January to April, 773 were severe and 50 were fatal.[20]
 • In Indonesia, where the disease is often called Singaporean influenza or flu Singapura,[21] the disease was reported in the Jakarta area, starting with eight young children.[22] By late April, health agencies in Jakarta were warning community health centers and advocating preventive steps, including the use of thermal scanners in airports and avoiding travel to Singapore.[23]

[edit] 2010

 • In China, an outbreak occurred in southern China's Guangxi Autonomous Region, as well as Guangdong, Henan, Hebei and Shandong provinces. By March, 70,756 children had been infected and 40 had died from the disease.

Magnaporthe grisea


Magnaporthe grisea

A conidium and conidiogenous cell of M. grisea

Scientific classification

Kingdom: Fungi

Phylum: Ascomycota

Class: Sordariomycetes

Order: Magnaporthales

Family: Magnaporthaceae

Genus: Magnaporthe

Species: M. grisea

Binomial name

Magnaporthe grisea
(T.T. Hebert) M.E. Barr

Synonyms

Ceratosphaeria grisea T.T. Hebert, (1971)
Dactylaria grisea (Cooke) Shirai, (1910)
Dactylaria oryzae (Cavara) Sawada, (1917)
Phragmoporthe grisea (T.T. Hebert) M. Monod, (1983)
Pyricularia grisea Sacc., (1880) (anamorph)
Pyricularia grisea (Cooke) Sacc., (1880)
Pyricularia oryzae Cavara, (1891)
Trichothecium griseum Cooke
Trichothecium griseum Speg., (1882)
Contents

 • 1 Summary
 • 2 Hosts and symptoms
 • 3 Disease cycle
 • 4 Environment
 • 5 Management
 • 6 Importance
 • 7 References
 • 8 Additional sources
 • 9 External links

[edit] Summary

Magnaporthe grisea, also known as rice blast fungus, rice rotten neck, rice seedling blight, blast of
rice, oval leaf spot of graminea, pitting disease, ryegrass blast, and Johnson spot,[1] is a plant-
pathogenic fungus that causes an important disease affecting rice. It is now known that M. grisea consists
of a cryptic species complex containing at least two biological species that have clear genetic differences
and do not interbreed.[2] Complex members isolated from Digitaria have been more narrowly defined as
M. grisea. The remaining members of the complex isolated from rice and a variety of other hosts have
been renamed Magnaporthe oryzae. Confusion on which of these two names to use for the rice blast
pathogen remains, as both are now used by different authors.

Members of the Magnaporthe grisea complex can also infect other agriculturally important cereals
including wheat, rye, barley, and pearl millet causing diseases called blast disease or blight disease. Rice
blast causes economically significant crop losses annually. Each year it is estimated to destroy enough
rice to feed more than 60 million people. The fungus is known to occur in 85 countries worldwide.[3]

[edit] Hosts and symptoms

Lesions on rice leaves caused by infection with M. grisea

Rice blast lesions on plant nodes

M. grisea is an ascomycete fungus. It is an extremely effective plant pathogen because it can reproduce
both sexually and asexually, and because it produces specialized infectious structures known as
appressoria, which infect aerial tissues, as well as hyphae that can infect root tissues.

Rice blast has been observed on rice strains M-201, M-202, M-204, M-205, M-103, M-104, S-102, L-
204, Calmochi-101, with M-201 being the most vulnerable.[4] Initial symptoms are white to gray-green
lesions or spots with darker borders produced on all parts of the shoot, while older lesions are elliptical or
spindle-shaped and whitish to gray with necrotic borders. Lesions may enlarge and coalesce to kill the
entire leaf. Symptoms are observed on all above-ground parts of the plant.[5] Lesions can be seen on the
leaf collar, culm, culm nodes, and panicle neck node. Internodal infection of the culm occurs in a banded
pattern. Nodal infection causes the culm to break at the infected node (rotten neck).[6] The disease also
reduces the host's seed production by preventing maturation of the grain.[7]

[edit] Disease cycle

Spores of M. grisea

The pathogen infects as a spore that produces lesions or spots on parts of the rice plant such as the leaf,
leaf collar, panicle, culm and culm nodes. Using a structure called an appressorium, the pathogen
penetrates the plant. M. grisea then sporulates from the diseased rice tissue and is dispersed as
conidiospores.[8] After overwintering in sources such as rice straw and stubble, the cycle repeats.[3]

Under favorable conditions a single cycle can be completed in about a week, and a single lesion can
generate up to thousands of spores in a single night. Because lesions can continue to produce spores for
over 20 days, rice blast can be devastating to susceptible rice crops.[9]
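
As a rough sense of scale for these figures, the sketch below multiplies an assumed nightly spore output
by an assumed productive lifetime for one lesion. It is illustrative arithmetic only (the 2,000
spores-per-night and 20-night values are assumptions chosen from the ranges above), not an
epidemiological model.

    # Illustrative arithmetic only; both values are assumptions drawn from
    # the ranges quoted above ("up to thousands" of spores per night,
    # sporulation for "over 20 days").
    spores_per_night = 2000
    productive_nights = 20
    total_spores = spores_per_night * productive_nights
    print(f"One lesion could shed on the order of {total_spores:,} spores")  # ~40,000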

[edit] Environment

Rice blast is a significant problem in temperate regions and can be found in environments such as irrigated
lowland and upland rice fields.[10] Conditions conducive to rice blast include long periods of free moisture,
since leaf wetness is required for infection, and high humidity.[10] Sporulation increases with high
relative humidity, and at 77–82 °F (25–28 °C) spore germination, lesion formation, and sporulation are at
their optimum levels.[3]

In terms of control, excessive nitrogen fertilization and drought stress both increase rice susceptibility to
the pathogen, because the weakened plant's defenses are low.[3] Extended drain periods also favor
infection: they aerate the soil, converting ammonium to nitrate, which likewise stresses the rice crop.[3]

[edit] Management

The fungus has been able to overcome both chemical treatments and the genetic resistance bred into some
types of rice, an adaptability thought to arise from genetic change through mutation. To control infection
by M. grisea most effectively, an integrated management program should be implemented, avoiding
overuse of any single control method and slowing the breakdown of resistance. For example, eliminating
crop residue could reduce the occurrence of overwintering and discourage inoculation in subsequent
seasons. Another strategy is to plant resistant rice varieties that are less susceptible to infection by
M. grisea.[3] Knowledge of the pathogenicity of M. grisea and its need for free moisture suggests other
control strategies, such as regulated irrigation and combinations of chemical treatments with different
modes of action.[3] Managing the amount of water supplied to the crops limits spore mobility, thus
reducing the opportunity for infection. Chemical controls such as carpropamid have been shown to
prevent penetration of the appressoria into rice epidermal cells, leaving the grain unaffected.[11]

[edit] Importance

Rice blast is the most important disease of the rice crop worldwide. Since rice is a staple food for much of
the world, the disease's effects are felt broadly. It has been found in over 85 countries across the world
and reached the United States in 1996. Every year the amount of rice lost to blast could feed 60 million
people. Although there are some resistant strains of rice, the disease persists wherever rice is grown and
has never been eradicated from a region.[12]

Poliomyelitis

From Wikipedia, the free encyclopedia

"Polio" redirects here. For the virus, see Poliovirus.

Poliomyelitis

Classification and external resources

A man with an atrophied right leg due to poliomyelitis

ICD-10 A80., B91.

ICD-9 045, 138

DiseasesDB 10209

MedlinePlus 001402

eMedicine ped/1843 pmr/6

MeSH C02.182.600.700

Poliomyelitis, often called polio or infantile paralysis, is an acute viral infectious disease spread from
person to person, primarily via the fecal-oral route.[1] The term derives from the Greek poliós (πολιός),
meaning "grey", myelós (µυελός), referring to the "spinal cord", and the suffix -itis, which denotes
inflammation.[2]

Although around 90% of polio infections cause no symptoms at all, affected individuals can exhibit a
range of symptoms if the virus enters the blood stream.[3] In about 1% of cases the virus enters the central
nervous system, preferentially infecting and destroying motor neurons, leading to muscle weakness and
acute flaccid paralysis. Different types of paralysis may occur, depending on the nerves involved. Spinal
polio is the most common form, characterized by asymmetric paralysis that most often involves the legs.
Bulbar polio leads to weakness of muscles innervated by cranial nerves. Bulbospinal polio is a
combination of bulbar and spinal paralysis.[4]

Poliomyelitis was first recognized as a distinct condition by Jakob Heine in 1840.[5] Its causative agent,
poliovirus, was identified in 1908 by Karl Landsteiner.[5] Although major polio epidemics were unknown
before the late 19th century, polio was one of the most dreaded childhood diseases of the 20th century.
Polio epidemics have crippled thousands of people, mostly young children; the disease has caused
paralysis and death for much of human history. Polio had existed for thousands of years quietly as an
endemic pathogen until the 1880s, when major epidemics began to occur in Europe; soon after,
widespread epidemics appeared in the United States.[6]

By 1910, much of the world experienced a dramatic increase in polio cases and frequent epidemics
became regular events, primarily in cities during the summer months. These epidemics—which left
thousands of children and adults paralyzed—provided the impetus for a "Great Race" towards the
development of a vaccine. Developed in the 1950s, polio vaccines are credited with reducing the global
number of polio cases per year from many hundreds of thousands to around a thousand.[7] Enhanced
vaccination efforts led by the World Health Organization, UNICEF, and Rotary International could result
in global eradication of the disease.[8]

Contents

 1 Classification
 2 Cause
 3 Transmission
 4 Pathophysiology
o 4.1 Paralytic polio
 4.1.1 Spinal polio
 4.1.2 Bulbar polio
 4.1.3 Bulbospinal polio
 5 Diagnosis
 6 Prevention
o 6.1 Passive immunization
o 6.2 Vaccine
 7 Treatment
 8 Prognosis
o 8.1 Recovery
o 8.2 Complications
o 8.3 Post-polio syndrome
 9 Eradication
 10 History
 11 See also
 12 Notes and references
 13 Further reading
 14 External links

[edit] Classification

Outcomes of poliovirus infection

Outcome                              Proportion of cases[4]

Asymptomatic                         90–95%

Minor illness                        4–8%

Non-paralytic aseptic meningitis     1–2%

Paralytic poliomyelitis              0.1–0.5%

— Spinal polio                       79% of paralytic cases

— Bulbospinal polio                  19% of paralytic cases

— Bulbar polio                       2% of paralytic cases

The term poliomyelitis is used to identify the disease caused by any of the three serotypes of poliovirus.
Two basic patterns of polio infection are described: a minor illness which does not involve the central
nervous system (CNS), sometimes called abortive poliomyelitis, and a major illness involving the CNS,
which may be paralytic or non-paralytic.[9] In most people with a normal immune system, a poliovirus
infection is asymptomatic. Rarely, the infection produces minor symptoms; these may include upper
respiratory tract infection (sore throat and fever), gastrointestinal disturbances (nausea, vomiting,
abdominal pain, constipation or, rarely, diarrhea), and influenza-like illness.[4]

The virus enters the central nervous system in about 3% of infections. Most patients with CNS
involvement develop non-paralytic aseptic meningitis, with symptoms of headache, neck, back,
abdominal and extremity pain, fever, vomiting, lethargy and irritability.[2][10] Approximately 1 in 200 to 1
in 1000 cases progress to paralytic disease, in which the muscles become weak, floppy and poorly
controlled, and finally completely paralyzed; this condition is known as acute flaccid paralysis.[11]
Depending on the site of paralysis, paralytic poliomyelitis is classified as spinal, bulbar, or bulbospinal.
Encephalitis, an infection of the brain tissue itself, can occur in rare cases and is usually restricted to
infants. It is characterized by confusion, changes in mental status, headaches, fever, and less commonly
seizures and spastic paralysis.[12]
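
The proportions in the table above can be turned into a back-of-the-envelope case breakdown. The sketch
below (Python, purely illustrative) applies the midpoint of each quoted range to a hypothetical cohort of
100,000 infections; the midpoints are an assumption, and because the source gives overlapping ranges the
shares do not sum to exactly 100%.

    # Apply the outcome proportions quoted above to a hypothetical cohort.
    # Midpoints of the quoted ranges are an assumption for illustration.
    cohort = 100_000
    outcome_share = {
        "asymptomatic":                     (0.90 + 0.95) / 2,
        "minor illness":                    (0.04 + 0.08) / 2,
        "non-paralytic aseptic meningitis": (0.01 + 0.02) / 2,
        "paralytic poliomyelitis":          (0.001 + 0.005) / 2,
    }
    for outcome, share in outcome_share.items():
        print(f"{outcome}: ~{share * cohort:,.0f} of {cohort:,} infections")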

[edit] Cause

Main article: Poliovirus

A TEM micrograph of poliovirus

Poliomyelitis is caused by infection with a member of the genus Enterovirus known as poliovirus (PV).
This group of RNA viruses colonize the gastrointestinal tract[1] — specifically the oropharynx and the
intestine. The incubation time (to the first signs and symptoms) ranges from 3 to 35 days with a more
common span of 6 to 20 days[4]. PV infects and causes disease in humans alone.[3] Its structure is very
simple, composed of a single (+) sense RNA genome enclosed in a protein shell called a capsid.[3] In
addition to protecting the virus’s genetic material, the capsid proteins enable poliovirus to infect certain
types of cells. Three serotypes of poliovirus have been identified—poliovirus type 1 (PV1), type 2 (PV2),
and type 3 (PV3)—each with a slightly different capsid protein.[13] All three are extremely virulent and
produce the same disease symptoms.[3] PV1 is the most commonly encountered form, and the one most
closely associated with paralysis.[14]

Individuals who are exposed to the virus, either through infection or by immunization with polio vaccine,
develop immunity. In immune individuals, IgA antibodies against poliovirus are present in the tonsils and
gastrointestinal tract and are able to block virus replication; IgG and IgM antibodies against PV can
prevent the spread of the virus to motor neurons of the central nervous system.[15] Infection or vaccination
with one serotype of poliovirus does not provide immunity against the other serotypes, and full immunity
requires exposure to each serotype.[15]

A rare condition with a similar presentation, non-poliovirus poliomyelitis, may result from infections with
non-poliovirus enteroviruses.[16]

[edit] Transmission

Poliomyelitis is highly contagious via the oral-oral (oropharyngeal source) and fecal-oral (intestinal
source) routes.[15] In endemic areas, wild polioviruses can infect virtually the entire human population. [17]
It is seasonal in temperate climates, with peak transmission occurring in summer and autumn.[15] These
seasonal differences are far less pronounced in tropical areas.[17] The time between first exposure and first
symptoms, known as the incubation period, is usually 6 to 20 days, with a maximum range of 3 to
35 days.[18] Virus particles are excreted in the feces for several weeks following initial infection.[18] The
disease is transmitted primarily via the fecal-oral route, by ingesting contaminated food or water. It is
occasionally transmitted via the oral-oral route,[14] a mode especially visible in areas with good sanitation
and hygiene.[15] Polio is most infectious between 7–10 days before and 7–10 days after the appearance of
symptoms, but transmission is possible as long as the virus remains in the saliva or feces.[14]

Factors that increase the risk of polio infection or affect the severity of the disease include immune
deficiency,[19] malnutrition,[20] tonsillectomy,[21] physical activity immediately following the onset of
paralysis,[22] skeletal muscle injury due to injection of vaccines or therapeutic agents,[23] and pregnancy.[24]
Although the virus can cross the placenta during pregnancy, the fetus does not appear to be affected by
either maternal infection or polio vaccination.[25] Maternal antibodies also cross the placenta, providing
passive immunity that protects the infant from polio infection during the first few months of life.[26]

[edit] Pathophysiology

A blockage of the lumbar anterior spinal cord artery due to polio (PV3)

Poliovirus enters the body through the mouth, infecting the first cells it comes in contact with—the
pharynx (throat) and intestinal mucosa. It gains entry by binding to an immunoglobulin-like receptor,
known as the poliovirus receptor or CD155, on the cell membrane.[27] The virus then hijacks the host cell's
own machinery, and begins to replicate. Poliovirus divides within gastrointestinal cells for about a week,
from where it spreads to the tonsils (specifically the follicular dendritic cells residing within the tonsilar
germinal centers), the intestinal lymphoid tissue including the M cells of Peyer's patches, and the deep
cervical and mesenteric lymph nodes, where it multiplies abundantly. The virus is subsequently absorbed
into the bloodstream.[28]

Known as viremia, the presence of virus in the bloodstream enables it to be widely distributed throughout
the body. Poliovirus can survive and multiply within the blood and lymphatics for long periods of time,
sometimes as long as 17 weeks.[29] In a small percentage of cases, it can spread and replicate in other sites
such as brown fat, the reticuloendothelial tissues, and muscle.[30] This sustained replication causes a major
viremia, and leads to the development of minor influenza-like symptoms. Rarely, this may progress and
the virus may invade the central nervous system, provoking a local inflammatory response. In most cases
this causes a self-limiting inflammation of the meninges, the layers of tissue surrounding the brain, which
is known as non-paralytic aseptic meningitis.[2] Penetration of the CNS provides no known benefit to the
virus, and is quite possibly an incidental deviation of a normal gastrointestinal infection. [31] The
mechanisms by which poliovirus spreads to the CNS are poorly understood, but it appears to be primarily
a chance event—largely independent of the age, gender, or socioeconomic position of the individual.[31]

[edit] Paralytic polio

Denervation of skeletal muscle tissue secondary to poliovirus infection can lead to paralysis.

In around 1% of infections, poliovirus spreads along certain nerve fiber pathways, preferentially
replicating in and destroying motor neurons within the spinal cord, brain stem, or motor cortex. This leads
to the development of paralytic poliomyelitis, the various forms of which (spinal, bulbar, and bulbospinal)
vary only with the amount of neuronal damage and inflammation that occurs, and the region of the CNS
that is affected.

The destruction of neuronal cells produces lesions within the spinal ganglia; these may also occur in the
reticular formation, vestibular nuclei, cerebellar vermis, and deep cerebellar nuclei.[31] Inflammation
associated with nerve cell destruction often alters the color and appearance of the gray matter in the spinal
column, causing it to appear reddish and swollen.[2] Other destructive changes associated with paralytic
disease occur in the forebrain region, specifically the hypothalamus and thalamus.[31] The molecular
mechanisms by which poliovirus causes paralytic disease are poorly understood.

Early symptoms of paralytic polio include high fever, headache, stiffness in the back and neck,
asymmetrical weakness of various muscles, sensitivity to touch, difficulty swallowing, muscle pain, loss
of superficial and deep reflexes, paresthesia (pins and needles), irritability, constipation, or difficulty
urinating. Paralysis generally develops one to ten days after early symptoms begin, progresses for two to
three days, and is usually complete by the time the fever breaks.[32]

The likelihood of developing paralytic polio increases with age, as does the extent of paralysis. In
children, non-paralytic meningitis is the most likely consequence of CNS involvement, and paralysis
occurs in only 1 in 1000 cases. In adults, paralysis occurs in 1 in 75 cases.[33] In children under five years
of age, paralysis of one leg is most common; in adults, extensive paralysis of the chest and abdomen also
affecting all four limbs—quadriplegia—is more likely.[34] Paralysis rates also vary depending on the
serotype of the infecting poliovirus; the highest rates of paralysis (1 in 200) are associated with poliovirus
type 1, the lowest rates (1 in 2,000) are associated with type 2.[35]
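
The relative risks quoted in this paragraph are easier to compare side by side. A minimal sketch of the
arithmetic, treating each "1 in N" figure as exact (an assumption; the sources give them as rough rates):

    # Paralysis risk ratios implied by the figures above (treated as exact).
    child_risk = 1 / 1000   # paralysis per child infection with CNS involvement
    adult_risk = 1 / 75     # paralysis per adult infection
    print(f"adults vs children: ~{adult_risk / child_risk:.0f}x higher risk")   # ~13x
    pv1_risk = 1 / 200      # poliovirus type 1
    pv2_risk = 1 / 2000     # poliovirus type 2
    print(f"type 1 vs type 2: {pv1_risk / pv2_risk:.0f}x higher risk")          # 10x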

[edit] Spinal polio

The location of motor neurons in the anterior horn cells of the spinal column.

Spinal polio is the most common form of paralytic poliomyelitis; it results from viral invasion of the
motor neurons of the anterior horn cells, or the ventral (front) gray matter section in the spinal column,
which are responsible for movement of the muscles, including those of the trunk, limbs and the intercostal
muscles.[11] Virus invasion causes inflammation of the nerve cells, leading to damage or destruction of
motor neuron ganglia. When spinal neurons die, Wallerian degeneration takes place, leading to weakness
of those muscles formerly innervated by the now dead neurons.[36] With the destruction of nerve cells, the
muscles no longer receive signals from the brain or spinal cord; without nerve stimulation, the muscles
atrophy, becoming weak, floppy and poorly controlled, and finally completely paralyzed. [11] Progression
to maximum paralysis is rapid (two to four days), and is usually associated with fever and muscle pain.[36]
Deep tendon reflexes are also affected, and are usually absent or diminished; sensation (the ability to feel)
in the paralyzed limbs, however, is not affected.[36]

The extent of spinal paralysis depends on the region of the cord affected, which may be cervical, thoracic,
or lumbar.[37] The virus may affect muscles on both sides of the body, but more often the paralysis is
asymmetrical.[28] Any limb or combination of limbs may be affected—one leg, one arm, or both legs and
both arms. Paralysis is often more severe proximally (where the limb joins the body) than distally (the
fingertips and toes).[28]

[edit] Bulbar polio

The location and anatomy of the bulbar region (in orange)

Making up about 2% of cases of paralytic polio, bulbar polio occurs when poliovirus invades and destroys
nerves within the bulbar region of the brain stem.[4] The bulbar region is a white matter pathway that
connects the cerebral cortex to the brain stem. The destruction of these nerves weakens the muscles
supplied by the cranial nerves, producing symptoms of encephalitis, and causes difficulty breathing,
speaking and swallowing.[10] Critical nerves affected are the glossopharyngeal nerve, which partially
controls swallowing and functions in the throat, tongue movement and taste; the vagus nerve, which sends
signals to the heart, intestines, and lungs; and the accessory nerve, which controls upper neck movement.
Due to the effect on swallowing, secretions of mucus may build up in the airway causing suffocation.[32]
Other signs and symptoms include facial weakness, caused by destruction of the trigeminal nerve and
facial nerve, which innervate the cheeks, tear ducts, gums, and muscles of the face, among other
structures; double vision; difficulty in chewing; and abnormal respiratory rate, depth, and rhythm, which
may lead to respiratory arrest. Pulmonary edema and shock are also possible, and may be fatal.[37]

[edit] Bulbospinal polio

Approximately 19% of all paralytic polio cases have both bulbar and spinal symptoms; this subtype is
called respiratory polio or bulbospinal polio.[4] Here, the virus affects the upper part of the cervical spinal
cord (C3 through C5), and paralysis of the diaphragm occurs. The critical nerves affected are the phrenic
nerve, which drives the diaphragm to inflate the lungs, and those that drive the muscles needed for
swallowing. By destroying these nerves this form of polio affects breathing, making it difficult or
impossible for the patient to breathe without the support of a ventilator. It can lead to paralysis of the arms
and legs and may also affect swallowing and heart functions.[38]

[edit] Diagnosis

Paralytic poliomyelitis may be clinically suspected in individuals experiencing acute onset of flaccid
paralysis in one or more limbs with decreased or absent tendon reflexes in the affected limbs that cannot
be attributed to another apparent cause, and without sensory or cognitive loss.[39]

A laboratory diagnosis is usually made based on recovery of poliovirus from a stool sample or a swab of
the pharynx. Antibodies to poliovirus can be diagnostic, and are generally detected in the blood of
infected patients early in the course of infection.[4] Analysis of the patient's cerebrospinal fluid (CSF),
which is collected by a lumbar puncture ("spinal tap"), reveals an increased number of white blood cells
(primarily lymphocytes) and a mildly elevated protein level. Detection of virus in the CSF is diagnostic of
paralytic polio, but rarely occurs.[4]

If poliovirus is isolated from a patient experiencing acute flaccid paralysis, it is further tested through
oligonucleotide mapping (genetic fingerprinting), or more recently by PCR amplification, to determine
whether it is "wild type" (that is, the virus encountered in nature) or "vaccine type" (derived from a strain
of poliovirus used to produce polio vaccine).[40] It is important to determine the source of the virus
because for each reported case of paralytic polio caused by wild poliovirus, it is estimated that another
200 to 3,000 contagious asymptomatic carriers exist.[41]
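
The 200-to-3,000 carrier estimate implies that reported paralytic cases represent only the tip of an
outbreak. A minimal sketch of that arithmetic, where the reported case count is a hypothetical input:

    # Each reported paralytic case implies 200-3,000 asymptomatic carriers,
    # per the estimate quoted above. The case count here is hypothetical.
    reported_paralytic_cases = 10
    low, high = 200, 3000
    print(f"{reported_paralytic_cases} reported cases imply roughly "
          f"{reported_paralytic_cases * low:,} to {reported_paralytic_cases * high:,} contagious carriers")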

[edit] Prevention

[edit] Passive immunization

In 1950, William Hammon at the University of Pittsburgh purified the gamma globulin component of the
blood plasma of polio survivors.[42] Hammon proposed that the gamma globulin, which contained
antibodies to poliovirus, could be used to halt poliovirus infection, prevent disease, and reduce the
severity of disease in other patients who had contracted polio. The results of a large clinical trial were
promising; the gamma globulin was shown to be about 80% effective in preventing the development of
paralytic poliomyelitis.[43] It was also shown to reduce the severity of the disease in patients who
developed polio.[42] The gamma globulin approach was later deemed impractical for widespread use,
however, due in large part to the limited supply of blood plasma, and the medical community turned its
focus to the development of a polio vaccine.[44]

[edit] Vaccine

Main article: Polio vaccine

A child receives oral polio vaccine.

Two types of vaccine are used throughout the world to combat polio. Both types induce immunity to
polio, efficiently blocking person-to-person transmission of wild poliovirus, thereby protecting both
individual vaccine recipients and the wider community (so-called herd immunity).[45]

The first candidate polio vaccine, based on one serotype of a live but attenuated (weakened) virus, was
developed by the virologist Hilary Koprowski. Koprowski's prototype vaccine was given to an eight-year-
old boy on February 27, 1950.[46] Koprowski continued to work on the vaccine throughout the 1950s,
leading to large-scale trials in the then Belgian Congo and the vaccination of seven million children in
Poland against serotypes PV1 and PV3 between 1958 and 1960.[47]

The second inactivated virus vaccine was developed in 1952 by Jonas Salk, and announced to the world
on April 12, 1955.[48] The Salk vaccine, or inactivated poliovirus vaccine (IPV), is based on poliovirus
grown in a type of monkey kidney tissue culture (Vero cell line), which is chemically inactivated with
formalin.[15] After two doses of IPV (given by injection), 90% or more of individuals develop protective
antibody to all three serotypes of poliovirus, and at least 99% are immune to poliovirus following three
doses.[4]

Subsequently, Albert Sabin developed another live, oral polio vaccine (OPV). It was produced by the
repeated passage of the virus through non-human cells at sub-physiological temperatures.[49] The
attenuated poliovirus in the Sabin vaccine replicates very efficiently in the gut, the primary site of wild
poliovirus infection and replication, but the vaccine strain is unable to replicate efficiently within nervous
system tissue.[50] A single dose of Sabin's oral polio vaccine produces immunity to all three poliovirus
serotypes in approximately 50% of recipients. Three doses of live-attenuated OPV produce protective
antibody to all three poliovirus types in more than 95% of recipients.[4] Human trials of Sabin's vaccine
began in 1957,[51] and in 1958 it was selected, in competition with the live vaccines of Koprowski and
other researchers, by the US National Institutes of Health.[47] It was licensed in 1962[51] and rapidly
became the only polio vaccine used worldwide.[47]
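
The dose-response figures quoted for the two vaccines are easier to compare in one place. The tabulation
below simply restates the percentages from the preceding paragraphs; the data structure itself is only an
illustration.

    # Seroconversion figures for IPV and OPV as quoted above.
    seroconversion = {
        ("IPV", 2): 0.90,   # 90% or more develop antibody to all three serotypes
        ("IPV", 3): 0.99,   # at least 99% immune after three doses
        ("OPV", 1): 0.50,   # ~50% immune to all three serotypes after one dose
        ("OPV", 3): 0.95,   # more than 95% after three doses
    }
    for (vaccine, doses), protected in sorted(seroconversion.items()):
        print(f"{vaccine}, {doses} dose(s): ~{protected:.0%} of recipients protected")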

Because OPV is inexpensive, easy to administer, and produces excellent immunity in the intestine (which
helps prevent infection with wild virus in areas where it is endemic), it has been the vaccine of choice for
controlling poliomyelitis in many countries.[52] On very rare occasions (about 1 case per 750,000 vaccine
recipients) the attenuated virus in OPV reverts into a form that can paralyze.[18] Most industrialized
countries have switched to IPV, which cannot revert, either as the sole vaccine against poliomyelitis or in
combination with oral polio vaccine.[53]

[edit] Treatment

A modern negative pressure ventilator (iron lung)

There is no cure for polio. The focus of modern treatment has been on providing relief of symptoms,
speeding recovery and preventing complications. Supportive measures include antibiotics to prevent
infections in weakened muscles, analgesics for pain, moderate exercise and a nutritious diet.[54] Treatment
of polio often requires long-term rehabilitation, including physical therapy, braces, corrective shoes and,
in some cases, orthopedic surgery.[37]

Portable ventilators may be required to support breathing. Historically, a noninvasive negative-pressure
ventilator, more commonly called an iron lung, was used to artificially maintain respiration during an
acute polio infection until a person could breathe independently (generally about one to two weeks).
Today many polio survivors with permanent respiratory paralysis use modern jacket-type negative-
pressure ventilators that are worn over the chest and abdomen.[55]

Other historical treatments for polio include hydrotherapy, electrotherapy, massage and passive motion
exercises, and surgical treatments such as tendon lengthening and nerve grafting.[11] Devices such as rigid
braces and body casts—which tended to cause muscle atrophy due to the limited movement of the user—
were also touted as effective treatments.[56]

[edit] Prognosis

Patients with abortive polio infections recover completely. In those that develop only aseptic meningitis,
the symptoms can be expected to persist for two to ten days, followed by complete recovery.[57] In cases
of spinal polio, if the affected nerve cells are completely destroyed, paralysis will be permanent; cells that
are not destroyed but lose function temporarily may recover within four to six weeks after onset.[57] Half
the patients with spinal polio recover fully; one quarter recover with mild disability and the remaining
quarter are left with severe disability.[58] The degree of both acute paralysis and residual paralysis is likely
to be proportional to the degree of viremia, and inversely proportional to the degree of immunity.[31]
Spinal polio is rarely fatal.[32]

A child with a deformity of her right leg due to polio

Without respiratory support, consequences of poliomyelitis with respiratory involvement include
suffocation or pneumonia from aspiration of secretions.[55] Overall, 5–10% of patients with paralytic polio
die due to the paralysis of muscles used for breathing. The mortality rate varies by age: 2–5% of children
and up to 15–30% of adults die.[4] Bulbar polio often causes death if respiratory support is not
provided;[38] with support, its mortality rate ranges from 25 to 75%, depending on the age of the
patient.[4][59] When positive pressure ventilators are available, the mortality can be reduced to 15%.[60]
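
The age-specific fatality figures above can be applied to a hypothetical caseload to see their combined
effect. The case counts in the sketch are assumptions chosen only to make the arithmetic concrete:

    # Expected deaths among paralytic polio cases, using the fatality
    # ranges quoted above; the case counts are hypothetical.
    child_cases, adult_cases = 1000, 1000
    child_cfr = (0.02, 0.05)   # 2-5% of paralyzed children die
    adult_cfr = (0.15, 0.30)   # 15-30% of paralyzed adults die
    low = child_cases * child_cfr[0] + adult_cases * adult_cfr[0]
    high = child_cases * child_cfr[1] + adult_cases * adult_cfr[1]
    print(f"expected deaths: {low:.0f}-{high:.0f} of {child_cases + adult_cases:,} paralytic cases")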

[edit] Recovery

Many cases of poliomyelitis result in only temporary paralysis.[11] Nerve impulses return to the formerly
paralyzed muscle within a month, and recovery is usually complete in six to eight months.[57] The
neurophysiological processes involved in recovery following acute paralytic poliomyelitis are quite
effective; muscles are able to retain normal strength even if half the original motor neurons have been
lost.[61] Paralysis remaining after one year is likely to be permanent, although modest recoveries of muscle
strength are possible 12 to 18 months after infection.[57]

One mechanism involved in recovery is nerve terminal sprouting, in which remaining brainstem and
spinal cord motor neurons develop new branches, or axonal sprouts.[62] These sprouts can reinnervate
orphaned muscle fibers that have been denervated by acute polio infection,[63] restoring the fibers'
capacity to contract and improving strength.[64] Terminal sprouting may generate a few significantly
enlarged motor neurons doing work previously performed by as many as four or five units:[33] a single
motor neuron that once controlled 200 muscle cells might control 800 to 1000 cells. Other mechanisms
that occur during the rehabilitation phase, and contribute to muscle strength restoration, include myofiber
hypertrophy—enlargement of muscle fibers through exercise and activity—and transformation of type II
muscle fibers to type I muscle fibers.[63][65]
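
The scale of the enlarged motor units described above follows directly from the quoted fiber counts; a
one-line check of the arithmetic:

    # A motor neuron that once drove ~200 muscle cells may, after terminal
    # sprouting, drive 800-1000 -- the work of four to five original units.
    original_fibers = 200
    for sprouted_fibers in (800, 1000):
        print(f"{sprouted_fibers} fibers = work of ~{sprouted_fibers / original_fibers:.0f} original motor units")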

In addition to these physiological processes, the body possesses a number of compensatory mechanisms
to maintain function in the presence of residual paralysis. These include the use of weaker muscles at a
higher than usual intensity relative to the muscle's maximal capacity, enhancing athletic development of
previously little-used muscles, and using ligaments for stability, which enables greater mobility.[65]

[edit] Complications

Residual complications of paralytic polio often occur following the initial recovery process.[10] Muscle
paresis and paralysis can sometimes result in skeletal deformities, tightening of the joints and movement
disability. Once the muscles in the limb become flaccid, they may interfere with the function of other
muscles. A typical manifestation of this problem is equinus foot (similar to club foot). This deformity
develops when the muscles that pull the toes downward are working but those that pull them upward are
not, and the foot naturally tends to drop toward the ground. If the problem is left untreated, the Achilles
tendon at the back of the foot retracts and the foot cannot take on a normal position. Polio victims who
develop equinus foot cannot walk properly because they cannot put their heel on the ground. A similar
situation can develop if the arms become paralyzed.[66] In some cases the growth of an affected leg is
slowed by polio, while the other leg continues to grow normally. The result is that one leg is shorter than
the other and the person limps and leans to one side, in turn leading to deformities of the spine (such as
scoliosis).[66] Osteoporosis and increased likelihood of bone fractures may occur. Extended use of braces
or wheelchairs may cause compression neuropathy, as well as a loss of proper function of the veins in the
legs, due to pooling of blood in paralyzed lower limbs.[38][67] Complications from prolonged immobility
involving the lungs, kidneys and heart include pulmonary edema, aspiration pneumonia, urinary tract
infections, kidney stones, paralytic ileus, myocarditis and cor pulmonale.[38][67]

[edit] Post-polio syndrome

Main article: Post-polio syndrome

Around a quarter of individuals who survive paralytic polio in childhood develop additional symptoms
decades after recovering from the acute infection, notably muscle weakness, extreme fatigue, or paralysis.
This condition is known as post-polio syndrome (PPS) or post-polio sequelae.[68] The symptoms of PPS
are thought to involve a failure of the over-sized motor units created during recovery from paralytic
disease.[69][70] Factors that increase the risk of PPS include the length of time since acute poliovirus
infection, the presence of permanent residual impairment after recovery from the acute illness, and both
overuse and disuse of neurons.[68] Post-polio syndrome is not an infectious process, and persons
experiencing the syndrome do not shed poliovirus.[4]

[edit] Eradication

Map: disability-adjusted life years for poliomyelitis per 100,000 inhabitants, ranging from ≤ 0.35 to
≥ 3.85 (WHO, 2002).

Main article: Poliomyelitis eradication

While now rare in the Western world, polio is still endemic to South Asia and Nigeria. Following the
widespread use of poliovirus vaccine in the mid-1950s, the incidence of poliomyelitis declined
dramatically in many industrialized countries. A global effort to eradicate polio began in 1988, led by the
World Health Organization, UNICEF, and The Rotary Foundation.[71] These efforts have reduced the
number of annual diagnosed cases by 99%; from an estimated 350,000 cases in 1988 to a low of 483
cases in 2001, after which it has remained at a level of about 1,000 cases per year (1,606 in 2009).[72][73][74]
Polio is one of only two diseases currently the subject of a global eradication program, the other being
Guinea worm disease. If the Global Polio Eradication Initiative succeeds before the campaign against
Guinea worm or any other disease, it would be only the third time humankind has ever completely
eradicated a disease, after smallpox in 1979[75] and rinderpest in 2010.[76] A number of eradication
milestones have already been reached, and several regions of the world have been certified polio-free. The
Americas were declared polio-free in 1994.[77] In 2000 polio was officially eliminated in 36 Western
Pacific countries, including China and Australia.[78][79] Europe was declared polio-free in 2002.[80] As of
2006, polio remains endemic in only four countries: Nigeria, India (specifically Uttar Pradesh and Bihar),
Pakistan, and Afghanistan,[72][81] although it continues to cause epidemics in nearby countries through
hidden or reestablished transmission.[82]
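
The "99%" reduction quoted above checks out against the raw case counts; a quick verification:

    # Decline in annual diagnosed polio cases, 1988 to 2001, per the figures above.
    cases_1988, cases_2001 = 350_000, 483
    reduction = 1 - cases_2001 / cases_1988
    print(f"reduction: {reduction:.2%}")   # ~99.86%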

Much of the work towards eradication was documented by Brazilian photographer Sebastião Salgado, as
a UNICEF Goodwill Ambassador, in the book The End of Polio: Global Effort to End a Disease.[83]

[edit] History

Main article: History of poliomyelitis

An Egyptian stele thought to represent a polio victim, 18th Dynasty (1403–1365 BC)

The effects of polio have been known since prehistory; Egyptian paintings and carvings depict otherwise
healthy people with withered limbs, and children walking with canes at a young age.[5] The first clinical
description was provided by the English physician Michael Underwood in 1789, who referred to polio as
"a debility of the lower extremities".[84] The work of physicians Jakob Heine in 1840 and Karl Oskar
Medin in 1890 led to it being known as Heine-Medin disease.[85] The disease was later called infantile
paralysis, based on its propensity to affect children.

Before the 20th century, polio infections were rarely seen in infants before six months of age, most cases
occurring in children six months to four years of age.[86] Poorer sanitation of the time resulted in a
constant exposure to the virus, which enhanced a natural immunity within the population. In developed
countries during the late 19th and early 20th centuries, improvements were made in community
sanitation, including better sewage disposal and clean water supplies. These changes drastically increased
the proportion of children and adults at risk of paralytic polio infection, by reducing childhood exposure
and immunity to the disease.

Small localized paralytic polio epidemics began to appear in Europe and the United States around 1900.[6]
Outbreaks reached pandemic proportions in Europe, North America, Australia, and New Zealand during
the first half of the 20th century. By 1950 the peak age incidence of paralytic poliomyelitis in the United
States had shifted from infants to children aged five to nine years, when the risk of paralysis is greater;
about one-third of the cases were reported in persons over 15 years of age.[87] Accordingly, the rate of
paralysis and death due to polio infection also increased during this time.[6] In the United States, the 1952
polio epidemic became the worst outbreak in the nation's history. Of nearly 58,000 cases reported that
year, 3,145 died and 21,269 were left with mild to disabling paralysis.[88] Intensive-care medicine has its
origin in the fight against polio. Most hospitals in the 1950s had limited access to iron lungs for patients
unable to breathe without mechanical assistance. The establishment of respiratory centers to assist the
most severely affected polio patients was hence the harbinger of subsequent intensive care units (ICUs).[89]

The polio epidemics changed not only the lives of those who survived them, but also brought about
profound cultural changes, spurring grassroots fund-raising campaigns that would revolutionize medical
philanthropy and giving rise to the modern field of rehabilitation therapy. As one of the largest disabled
groups in the world, polio survivors also helped to advance the modern disability rights movement
through campaigns for the social and civil rights of the disabled. The World Health Organization
estimates that there are 10 to 20 million polio survivors worldwide.[90] In 1977 there were 254,000
persons living in the United States who had been paralyzed by polio.[91] According to doctors and local
polio support groups, some 40,000 polio survivors with varying degrees of paralysis live in Germany,
30,000 in Japan, 24,000 in France, 16,000 in Australia, 12,000 in Canada and 12,000 in the United
Kingdom.[90] Many notable individuals have survived polio and often credit the prolonged immobility and
residual paralysis associated with polio as a driving force in their lives and careers.[92]

The disease was very well publicized during the polio epidemics of the 1950s, with extensive media
coverage of any scientific advancements that might lead to a cure. Thus, the scientists working on polio
became some of the most famous of the century. Fifteen scientists and two laymen who made important
contributions to the knowledge and treatment of poliomyelitis are honored by the Polio Hall of Fame,
which was dedicated in 1957 at the Roosevelt Warm Springs Institute for Rehabilitation in Warm
Springs, Georgia, USA. In 2008 four organizations (Rotary International, the World Health Organization,
the U.S. Centers for Disease Control and UNICEF) were added to the Hall of Fame.

Rabies

From Wikipedia, the free encyclopedia

This article is about the disease. For the virus, see Rabies virus. For other uses, see Rabies
(disambiguation).

Rabies

Classification and external resources

Dog with rabies virus

ICD-10 A82.

DiseasesDB 11148

eMedicine med/1374 emerg/493 ped/1974

MeSH D011818

Rabies (/ˈreɪbiːz/; from Latin rabies) is a viral disease that causes acute encephalitis
(inflammation of the brain) in warm-blooded animals.[1] It is zoonotic (i.e., transmitted by animals), most
commonly by a bite from an infected animal but occasionally by other forms of contact. Rabies is almost
invariably fatal if post-exposure prophylaxis is not administered prior to the onset of severe symptoms.
The rabies virus infects the central nervous system, ultimately causing disease in the brain and death. The
early symptoms of rabies in people are similar to those of many other illnesses, including fever, headache,
and general weakness or discomfort. As the disease progresses, more specific symptoms appear and may
include insomnia, anxiety, confusion, slight or partial paralysis, excitation, hallucinations, agitation,
hypersalivation (increase in saliva), difficulty swallowing, and hydrophobia (fear of water). Death usually
occurs within days of the onset of these symptoms.

The rabies virus travels to the brain by following the peripheral nerves. The incubation period of the
disease is usually a few months in humans, depending on the distance the virus must travel to reach the
central nervous system.[2] Once the rabies virus reaches the central nervous system and symptoms begin
to show, the infection is effectively untreatable and usually fatal within days.

Early-stage symptoms of rabies are malaise, headache and fever, progressing to acute pain, violent
movements, uncontrolled excitement, depression, and hydrophobia.[1] Finally, the patient may experience
periods of mania and lethargy, eventually leading to coma. The primary cause of death is usually
respiratory insufficiency.[2] Worldwide, the vast majority of human rabies cases (approximately 97%)
come from dog bites.[3] In the United States, however, animal control and vaccination programs have
effectively eliminated domestic dogs as reservoirs of rabies.[4] In several countries, including the United
Kingdom, Estonia and Japan, rabies carried by animals that live on the ground has been eradicated
entirely. Concerns exist about airborne and mixed-habitat animals including bats. Bats in the U.K. and in
some other countries carry European Bat Lyssavirus 1 and European Bat Lyssavirus 2. The symptoms of
these viruses are similar to those of rabies and so the viruses are both known as bat rabies. An
unvaccinated Scottish bat handler died from an EBLV infection in 2002.[2]

The economic impact is also substantial, as rabies is a significant cause of death of livestock in some
countries.

Contents

 1 Signs and symptoms
 2 Virology
 3 Diagnosis
 4 Prevention
 5 Management
o 5.1 Post-exposure prophylaxis
o 5.2 Blood-brain barrier
o 5.3 Induced coma
 6 Prognosis
 7 Epidemiology
o 7.1 Transmission
o 7.2 Prevalence
 8 History
o 8.1 Etymology
o 8.2 Impact
 9 In other animals
 10 See also
 11 References
 12 External links

[edit] Signs and symptoms

Patient with rabies, 1959

The period between infection and the first flu-like symptoms is normally two to twelve weeks, but can be
as long as two years. Soon after, the symptoms expand to slight or partial paralysis, cerebral dysfunction,
anxiety, insomnia, confusion, agitation, abnormal behavior, paranoia, terror, hallucinations, progressing to
delirium.[2][5] The production of large quantities of saliva and tears coupled with an inability to speak or
swallow are typical during the later stages of the disease; this can result in hydrophobia, in which the
patient has difficulty swallowing because the throat and jaw become slowly paralyzed, shows panic when
presented with liquids to drink, and cannot quench his or her thirst.

Death almost invariably results two to ten days after first symptoms; the few humans who are known to
have survived the disease were all left with severe brain damage.[6] In 2005, the first patient was treated
with the Milwaukee protocol,[7] and an intention-to-treat analysis has since found that this protocol has a
survival rate of about 8%.[8]

[edit] Virology

Main article: Rabies virus

TEM micrograph with numerous rabies virions (small, dark grey, rodlike particles) and Negri bodies (the
larger pathognomonic cellular inclusions of rabies infection).

The rabies virus is the type species of the Lyssavirus genus, in the family Rhabdoviridae, order
Mononegavirales. Lyssaviruses have helical symmetry, with a length of about 180 nm and a cross-
sectional diameter of about 75 nm.[1] These viruses are enveloped and have a single stranded RNA
genome with negative-sense. The genetic information is packaged as a ribonucleoprotein complex in
which RNA is tightly bound by the viral nucleoprotein. The RNA genome of the virus encodes five genes
whose order is highly conserved: nucleoprotein (N), phosphoprotein (P), matrix protein (M), glycoprotein
(G) and the viral RNA polymerase (L).[9]

From the point of entry, the virus is neurotropic, traveling quickly along the neural pathways into the
central nervous system (CNS), and then further into other organs.[2] The salivary glands receive high
concentrations of the virus, thus allowing further transmission.

[edit] Diagnosis

The reference method for diagnosing rabies is by performing PCR or viral culture on brain samples taken
after death. The diagnosis can also be reliably made from skin samples taken before death.[10] It is also
possible to make the diagnosis from saliva, urine and cerebrospinal fluid samples, but this is not as
sensitive. Inclusion bodies called Negri bodies are 100% diagnostic for rabies infection, but are found in
only about 80% of cases.[1] If possible, the animal from which the bite was received should also be
examined for rabies.[11]

The differential diagnosis in a case of suspected human rabies may initially include any cause of
encephalitis, particularly infection with viruses such as herpesviruses, enteroviruses, and arboviruses
(e.g., West Nile virus). The most important viruses to rule out are herpes simplex virus type 1, varicella-
zoster virus, and (less commonly) enteroviruses, including coxsackieviruses, echoviruses, polioviruses,
and human enteroviruses 68 to 71.[12] In addition, consideration should be given to the local epidemiology
of encephalitis caused by arboviruses belonging to several taxonomic groups, including eastern and
western equine encephalitis viruses, St. Louis encephalitis virus, Powassan virus, the California
encephalitis virus serogroup, and La Crosse virus.[citation needed]

New causes of viral encephalitis are also possible, as was evidenced by the recent outbreak in Malaysia of
some 300 cases of encephalitis (mortality rate, 40%) caused by Nipah virus, a newly recognized
paramyxovirus.[13] Similarly, well-known viruses may be introduced into new locations, as is illustrated
by the recent outbreak of encephalitis due to West Nile virus in the eastern United States.[14]
Epidemiologic factors (e.g., season, geographic location, and the patient’s age, travel history, and possible
exposure to animal bites, rodents, and ticks) may help direct the diagnostic workup.

Cheaper rabies diagnosis is becoming possible for low-income settings: accurate rabies diagnosis can be
done at a tenth of the cost of traditional testing, using basic light microscopy techniques.[15]

[edit] Prevention

Main article: Rabies vaccine

All human cases of rabies were fatal until a vaccine was developed in 1885 by Louis Pasteur and Émile
Roux. Their original vaccine was harvested from infected rabbits, from which the virus in the nerve tissue
was weakened by allowing it to dry for five to ten days.[16] Similar nerve tissue-derived vaccines are still
used in some countries, as they are much cheaper than modern cell culture vaccines.[17] The human
diploid cell rabies vaccine was started in 1967; however, a new and less expensive purified chicken
embryo cell vaccine and purified vero cell rabies vaccine are now available.[11] A recombinant vaccine
called V-RG has been successfully used in Belgium, France, Germany and the United States to prevent
outbreaks of rabies in wildlife.[18] Pre-exposure immunization is used in both human and non-human
populations, and in many jurisdictions domesticated animals are required to be vaccinated.[19]

In the U.S., since the widespread vaccination of domestic dogs and cats and the development of effective
human vaccines and immunoglobulin treatments, the number of recorded deaths from rabies has dropped
from one hundred or more annually in the early twentieth century, to 1–2 per year, mostly caused by bat
bites, which may go unnoticed by the victim and hence untreated.[4]

September 28 is World Rabies Day, which promotes information on, and prevention and elimination of
the disease.[20]

[edit] Management

[edit] Post-exposure prophylaxis

Treatment after exposure, known as post-exposure prophylaxis (PEP), is highly successful in preventing
the disease if administered promptly, generally within ten days of infection.[1] Thoroughly washing the
wound as soon as possible with soap and water for approximately five minutes is very effective at
reducing the number of viral particles. “If available, a virucidal antiseptic such as povidone-iodine, iodine
tincture, aqueous iodine solution, or alcohol (ethanol) should be applied after washing. Exposed mucous
membranes such as eyes, nose or mouth should be flushed well with water.”[21]

In the United States, the Centers for Disease Control and Prevention (CDC) recommend patients receive
one dose of human rabies immunoglobulin (HRIG) and four doses of rabies vaccine over a fourteen-day
period. The immunoglobulin dose should not exceed 20 units per kilogram of body weight. HRIG is very
expensive and constitutes the vast majority of the cost of post-exposure treatment, ranging as high as
several thousand dollars. As much as possible of this dose should be infiltrated around the bites, with the
remainder being given by deep intramuscular injection at a site distant from the vaccination site.[22] The
first dose of rabies vaccine is given as soon as possible after exposure, with additional doses on days
three, seven and fourteen after the first. Patients who have previously received pre-exposure vaccination
do not receive the immunoglobulin, only the post-exposure vaccinations on days 0 and 3.
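
The HRIG ceiling of 20 units per kilogram translates into a simple weight-based calculation. The helper
below is a hypothetical illustration of that arithmetic only, not clinical guidance; the function name and
the example weight are invented for the sketch.

    # Illustrative only, not clinical guidance: maximum HRIG dose implied
    # by the 20 units-per-kilogram ceiling quoted above.
    def max_hrig_dose_units(weight_kg: float, units_per_kg: float = 20.0) -> float:
        """Return the maximum immunoglobulin dose for a given body weight."""
        return weight_kg * units_per_kg

    print(max_hrig_dose_units(70.0))   # 1400.0 units for a 70 kg patient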

Modern cell-based vaccines are similar to flu shots in terms of pain and side effects. The old nerve-tissue-
based vaccinations that require multiple painful injections into the abdomen with a large needle are cheap,
but are being phased out and replaced by affordable WHO ID (intradermal) vaccination regimens.[11]

Intramuscular vaccination should be given into the deltoid, not the gluteal area, which has been associated
with vaccination failure due to injection into fat rather than muscle. In infants, the lateral thigh is used, as
for routine childhood vaccinations.

An individual awakening to find a bat in the room, or finding a bat in the room of a previously unattended
child or of a mentally disabled or intoxicated person, is regarded as an indication for post-exposure
prophylaxis. The recommendation for the precautionary use of post-exposure prophylaxis in occult bat
encounters where there is no recognized contact has been questioned in the medical literature on the basis
of a cost-benefit analysis.[23] However, recent studies have supported maintaining the current protocol of
precautionary administration of PEP in cases where a child or mentally compromised individual has been
left alone with a bat, especially in sleeping areas, where a bite or exposure may occur while the victim is
asleep and unaware, or awake but unaware, that a bite occurred. This is illustrated by the September 2000
case of a nine-year-old boy from Quebec who died from rabies three weeks after being in the presence of
a sick bat, even though there was no apparent report of a bite, as shown in the following conclusion by the
doctors involved in the case:

Despite recent criticism (45), the dramatic circumstances surrounding our patient's history, as well as
increasingly frequent reports of human rabies contracted in North America, support the current Canadian
guidelines which state that RPEP [PEP] is appropriate in cases where a significant contact with a bat
cannot be excluded (45). The notion that a bite or an overt break in the skin needs to be seen or felt for
rabies to be transmitted by a bat is a myth in many cases.[24]

It is highly recommended that PEP be administered as soon as possible. Begun with little or no delay,
PEP is 100% effective against rabies.[7] Even when there has been a significant delay in administering
PEP, the treatment should still be given, as it may yet be effective.[22] If the delay is such that the virus
may already have penetrated the nervous system, amputation of the affected limb might conceivably
thwart rabies if the bite or exposure was on an arm or leg; such treatment should be combined with an
intensive PEP regimen.[citation needed]

[edit] Blood-brain barrier

Some recent work has shown that during lethal rabies infection, the blood-brain barrier (BBB) does not
allow anti-viral immune cells to enter the brain, the primary site of rabies virus replication.[25] This aspect
contributes to the pathogenicity of the virus, and artificially increasing BBB permeability promotes viral
clearance.[26] Opening the BBB during rabies infection has been suggested as a possible novel approach to
treating the disease, even though no attempts have yet been made to determine whether or not this
treatment could be successful.[citation needed]

[edit] Induced coma

See also: Milwaukee protocol

In 2005, American teenager Jeanna Giese survived a rabies infection despite being unvaccinated. She was placed
into an induced coma upon onset of symptoms and given ketamine, midazolam, ribavirin, and
amantadine. Her doctors administered treatment based on the hypothesis that detrimental effects of rabies
were caused by temporary dysfunctions in the brain and could be avoided by inducing a temporary partial
halt in brain function that would protect the brain from damage while giving the immune system time to
defeat the virus. After thirty-one days of isolation and seventy-six days of hospitalization, Giese was
released from the hospital.[27] She survived with almost no permanent sequelae and as of 2009 was
starting her third year of university studies.[28]

Giese's treatment regimen became known as the "Milwaukee protocol", which has since undergone
revision (the second version omits the use of ribavirin). Two of the 25 patients treated under the first
protocol survived. A further 10 patients have been treated under the revised protocol, and there have been
two more survivors.[29] The anesthetic drug ketamine has shown the potential for rabies virus
inhibition in rats,[30] and is used as part of the Milwaukee protocol.

On April 10, 2008, in Cali, Colombia, an eleven-year-old boy was reported to have survived rabies and
the induced coma without noticeable brain damage.[31]

[edit] Prognosis

In unvaccinated humans, rabies is almost always fatal after neurological symptoms have developed, but
prompt post-exposure vaccination may prevent the virus from progressing. Rabies kills around 55,000
people a year, mostly in Asia and Africa.[32] There are only six known cases of a person surviving
symptomatic rabies, and only one known case of survival in which the patient received no rabies-specific
treatment either before or after illness onset.[33][34][35]

The most current survival data using the Milwaukee protocol is available from the rabies registry.[36]

[edit] Epidemiology

Rabies-free countries as of 2010

[edit] Transmission

Main article: Rabies transmission

Any warm-blooded animal (including humans) may become infected with the rabies virus and develop
symptoms (although birds have only been known to be experimentally infected[37]). Indeed, the virus has even been adapted to grow in cells of poikilothermic vertebrates.[38][39] Most animals can be infected by the virus and can transmit the disease to humans. Infected bats, monkeys, raccoons, foxes, skunks, cattle, wolves, coyotes, dogs, mongooses (normally the yellow mongoose)[40] or cats present the greatest risk to
humans. Rabies may also spread through exposure to infected domestic farm animals, groundhogs,
weasels, bears and other wild carnivores. Small rodents such as squirrels, hamsters, guinea pigs, gerbils,
chipmunks, rats, and mice and lagomorphs like rabbits and hares are almost never found to be infected
with rabies and are not known to transmit rabies to humans.[41]

The virus is usually present in the nerves and saliva of a symptomatic rabid animal.[42][43] The route of
infection is usually, but not always, by a bite. In many cases the infected animal is exceptionally
aggressive, may attack without provocation, and exhibits otherwise uncharacteristic behavior.[44]

Transmission between humans is extremely rare. A few cases have been recorded through transplant
surgery.[45]

After a typical human infection by bite, the virus enters the peripheral nervous system. It then travels
along the nerves towards the central nervous system.[46] During this phase, the virus cannot be easily
detected within the host, and vaccination may still confer cell-mediated immunity to prevent symptomatic
rabies. When the virus reaches the brain, it rapidly causes encephalitis. This is called the prodromal
phase, and is the beginning of the symptoms. Once the patient becomes symptomatic, treatment is almost
never effective and mortality is over 99%. Rabies may also inflame the spinal cord, producing transverse myelitis.[47][48]

[edit] Prevalence

Main article: Prevalence of rabies

The rabies virus survives in widespread, varied, rural fauna reservoirs. It is present in the animal
populations of almost every country in the world, except in Australia and New Zealand.[49] In some countries, such as those in western Europe and Oceania, rabies is considered to be prevalent among bat
populations only.

In Asia, parts of the Americas and large parts of Africa, dogs remain the principal host. Mandatory
vaccination of animals is less effective in rural areas. Especially in developing countries, pets may not be
privately kept and their destruction may be unacceptable. Oral vaccines can be safely distributed in baits, and this has successfully reduced rabies in rural areas of France, Ontario, Texas, Florida, and elsewhere; in Montréal, Québec, for example, baits are used successfully on raccoons in the Mont-Royal park area. Vaccination campaigns may be expensive, and cost-benefit analysis suggests that baits are a cost-effective method of control.[50]

There are an estimated 55,000 human deaths annually from rabies worldwide, with about 31,000 in Asia and 24,000 in Africa.[32] One source of the recent resurgence of rabies in East Asia is the pet boom. China introduced the “one-dog policy” in the city of Beijing in November 2006 to control the problem.[51] India has been reported as having the highest rate of human rabies in the world, primarily because of stray dogs.[52] As of 2007, Vietnam had the second-highest rate, followed by Thailand; in these countries, too, the virus is primarily transmitted through canines (feral dogs and other wild canine species). Recent reports suggest that rabid feral dogs roam the streets. Because the much cheaper pre-exposure vaccination is not commonly administered in places such as Thailand, families are often hit hard by the expense of the far more costly post-exposure prophylaxis.[53]

Rabies was once rare in the United States outside the Southern states[citation needed], but as of 2006, raccoons
in the mid-Atlantic and northeast United States had been suffering from a rabies epidemic since the
1970s, which was moving westwards into Ohio.[54] In the midwestern United States, skunks are the primary carriers of rabies, accounting for 134 of the 237 documented non-human cases in 1996.

[edit] History

[edit] Etymology

The term is derived from the Latin rabies, "madness".[55] This, in turn, may be related to the Sanskrit rabhas, "to do violence". The Greeks derived the word lyssa from lud, "violent"; this root is used in the name of the genus Lyssavirus, to which the rabies virus belongs.[56]

[edit] Impact


Because of its potentially violent nature, rabies has been known since c. 2000 B.C.[57] The first written record of rabies is in the Codex of Eshnunna (ca. 1930 BC), which dictates that the owner of a dog showing symptoms of rabies should take preventive measures against bites. If another person was bitten by a rabid dog and later died, the owner was heavily fined.[58]

Rabies was considered a scourge for its prevalence in the 19th century. Fear of rabies, related to its methods of transmission, was almost irrational;[56] however, this gave Louis Pasteur ample opportunity to test post-exposure treatments from 1885 onward.[59]

[edit] In other animals


Main article: Rabies in animals

Rabies is infectious to mammals. Three stages of rabies are recognized in dogs and other animals. The
first stage is a one- to three-day period characterized by behavioral changes and is known as the
prodromal stage. The second stage is the excitative stage, which lasts three to four days. It is this stage
that is often known as furious rabies for the tendency of the affected dog to be hyperreactive to external
stimuli and bite at anything near. The third stage is the paralytic stage and is caused by damage to motor
neurons. Incoordination is seen owing to rear-limb paralysis, and drooling and difficulty swallowing are caused by paralysis of the facial and throat muscles. Death is usually caused by respiratory arrest.[60]

As recently as 2004, a new symptom of rabies was observed in foxes. Probably at the beginning of the prodromal stage, foxes, which are extremely cautious by nature, seem to lose this instinct. Foxes will come into settlements, approach people, and generally behave as if tame. How long such "euphoria" lasts is not known, but even in this state such animals are extremely dangerous, as their saliva and secretions still contain the virus and they remain very unpredictable.[61]

Tetanus

From Wikipedia, the free encyclopedia


Tetanus

Classification and external resources

Muscular spasms (specifically opisthotonos) in a patient
suffering from tetanus. Painting by Sir Charles Bell,
1809.

ICD-10 A33.-A35.

ICD-9 037, 771.3

DiseasesDB 2829

MedlinePlus 000615

eMedicine emerg/574

MeSH D013742

Tetanus (from Ancient Greek: τέτανος tetanos "taut", and τείνειν teinein "to stretch")[1] is a medical
condition characterized by a prolonged contraction of skeletal muscle fibers. The primary symptoms are
caused by tetanospasmin, a neurotoxin produced by the Gram-positive, obligate anaerobic bacterium
Clostridium tetani. Infection generally occurs through wound contamination and often involves a cut or
deep puncture wound. As the infection progresses, muscle spasms develop in the jaw (thus the name
"lockjaw") and elsewhere in the body.[2] Infection can be prevented by proper immunization and by post-
exposure prophylaxis.[3]

Contents

 1 Signs and symptoms
 2 Cause
 3 Pathophysiology
 4 Diagnosis
 5 Prevention
 6 Treatment
o 6.1 Mild tetanus
o 6.2 Severe tetanus
 7 Epidemiology
 8 History
 9 Notable victims
 10 See also
 11 References
 12 External links
o 12.1 Media

[edit] Signs and symptoms

Lock-jaw and risus sardonicus in a patient suffering from tetanus.

An infant suffering from neonatal tetanus.

Tetanus affects skeletal muscle, a type of striated muscle used in voluntary movement. The other type of
striated muscle, cardiac or heart muscle, cannot be tetanized because of its intrinsic electrical properties.
Mortality rates reported vary from 48% to 73%. In recent years, approximately 11% of reported tetanus
cases have been fatal. The highest mortality rates are in unvaccinated people and people over 60 years of
age.[3]

The incubation period of tetanus may be up to several months but is usually about 8 days.[4][5] In general,
the further the injury site is from the central nervous system, the longer the incubation period. The shorter
the incubation period, the more severe the symptoms.[6] In neonatal tetanus, symptoms usually appear
from 4 to 14 days after birth, averaging about 7 days. On the basis of clinical findings, four different
forms of tetanus have been described.[3]

Generalized tetanus is the most common type of tetanus, representing about 80% of cases. The
generalized form usually presents with a descending pattern. The first sign is trismus, or lockjaw, and the
facial spasms called risus sardonicus, followed by stiffness of the neck, difficulty in swallowing, and
rigidity of pectoral and calf muscles. Other symptoms include elevated temperature, sweating, elevated
blood pressure, and episodic rapid heart rate. Spasms may occur frequently and last for several minutes
with the body shaped into a characteristic form called opisthotonos. Spasms continue for up to 4 weeks,
and complete recovery may take months.

Neonatal tetanus is a form of generalized tetanus that occurs in newborns. Infants who have not acquired
passive immunity because the mother has never been immunized are at risk. It usually occurs through
infection of the unhealed umbilical stump, particularly when the stump is cut with a non-sterile
instrument. Neonatal tetanus is common in many developing countries and is responsible for about 14%
(215,000) of all neonatal deaths, but is very rare in developed countries.[7]

Local tetanus is an uncommon form of the disease, in which patients have persistent contraction of
muscles in the same anatomic area as the injury. The contractions may persist for many weeks before
gradually subsiding. Local tetanus is generally milder; only about 1% of cases are fatal, but it may
precede the onset of generalized tetanus.

Cephalic tetanus is a rare form of the disease, occasionally occurring with otitis media (ear infections) in
which C. tetani is present in the flora of the middle ear, or following injuries to the head. There is
involvement of the cranial nerves, especially in the facial area.

[edit] Cause

Tetanus is often associated with rust, especially rusty nails, but this concept is somewhat misleading.
Objects that accumulate rust are often found outdoors, or in places that harbor anaerobic bacteria, but the
rust itself does not cause tetanus, nor does it harbor more C. tetani bacteria than other surfaces. The rough surface of rusty
metal merely provides a prime habitat for a C. tetani endospore to reside, and the nail affords a means to
puncture skin and deliver endospore into the wound. An endospore is a non-metabolizing survival
structure that begins to metabolize and cause infection once in an adequate environment. Because
C. tetani is an anaerobic bacterium, it and its endospores survive well in an environment that lacks
oxygen. Hence, stepping on a nail (rusty or not) may result in a tetanus infection, as the low-oxygen
(anaerobic) environment is provided by the same object which causes a puncture wound, delivering
endospores to a suitable environment for growth.

[edit] Pathophysiology

Facial spasms (risus sardonicus), a first symptom of generalized tetanus.

Tetanus begins when spores of Clostridium tetani enter damaged tissue. The spores transform into rod-
shaped bacteria and produce the neurotoxin tetanospasmin (also known as tetanus toxin). This toxin is
inactive inside the bacteria, but when the bacteria die, toxin is released and activated by proteases. Active
tetanospasmin is carried by retrograde axonal transport[6][8] to the spinal cord and brain stem where it
binds irreversibly to receptors at these sites.[6] It cleaves membrane proteins involved in
neuroexocytosis,[9] which in turn blocks neurotransmission. Ultimately, this produces the symptoms of the
disease. Damaged upper motor neurons can no longer inhibit lower motor neurons (see Renshaw cells),
nor can they control reflex responses to afferent sensory stimuli.[6] Both mechanisms produce the
hallmark muscle rigidity and spasms. Similarly, a lack of neural control of the adrenal glands results in
release of catecholamines, thus producing a hypersympathetic state and widespread autonomic instability.

C. tetani also produces tetanolysin, another toxin whose role in tetanus is unknown.

[edit] Diagnosis

There are currently no blood tests that can be used to diagnose tetanus. The diagnosis is based on the
presentation of tetanus symptoms and does not depend upon isolation of the bacteria, which is recovered
from the wound in only 30% of cases and can be isolated from patients who do not have tetanus.
Laboratory identification of C. tetani can be demonstrated only by production of tetanospasmin in mice.[3]

The "spatula test" is a clinical test for tetanus that involves touching the posterior pharyngeal wall with a
sterile, soft-tipped instrument, and observing the effect. A positive test result is the involuntary
contraction of the jaw (biting down on the "spatula"), and a negative test result would normally be a gag
reflex attempting to expel the foreign object. A short report in The American Journal of Tropical
Medicine and Hygiene states that in a patient research study, the spatula test had a high specificity (zero
false-positive test results) and a high sensitivity (94% of infected patients produced a positive test
result).[10]
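To see how the quoted figures map onto the standard definitions, the sketch below (hypothetical counts chosen only to match the percentages above, not data from the cited study) computes sensitivity and specificity from a diagnostic confusion matrix:

```python
# Sensitivity and specificity from confusion-matrix counts. The counts below
# are hypothetical, picked to reproduce the spatula-test figures quoted above
# (94% sensitivity, zero false positives).

def sensitivity(true_pos: int, false_neg: int) -> float:
    """Fraction of truly infected patients the test correctly flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """Fraction of uninfected patients the test correctly clears."""
    return true_neg / (true_neg + false_pos)

# Suppose 100 tetanus patients, 94 of whom test positive, and 100 controls,
# none of whom test positive.
print(f"sensitivity = {sensitivity(94, 6):.0%}")   # -> 94%
print(f"specificity = {specificity(100, 0):.0%}")  # -> 100%
```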

[edit] Prevention

Unlike many infectious diseases, recovery from naturally acquired tetanus does not usually result in
immunity to tetanus. This is due to the extreme potency of the tetanospasmin toxin; even a lethal dose of
tetanospasmin is insufficient to provoke an immune response.

Tetanus can be prevented by vaccination with tetanus toxoid.[11] The CDC recommends that adults receive
a booster vaccine every ten years,[12] and standard care practice in many places is to give the booster to
any patient with a puncture wound who is uncertain of when he or she was last vaccinated, or if he or she
has had fewer than three lifetime doses of the vaccine. The booster may not prevent a potentially fatal
case of tetanus from the current wound, however, as it can take up to two weeks for tetanus antibodies to
form.[13] In children under the age of seven, the tetanus vaccine is often administered as a combined
vaccine, DPT/DTaP vaccine, which also includes vaccines against diphtheria and pertussis. For adults and
children over seven, the Td vaccine (tetanus and diphtheria) or Tdap (tetanus, diphtheria, and acellular
pertussis) is commonly used.[11]

[edit] Treatment

The wound must be cleaned. Dead and infected tissue should be removed by surgical debridement.
Administration of the antibiotic metronidazole decreases the number of bacteria but has no effect on the
bacterial toxin. Penicillin was once used to treat tetanus, but is no longer the treatment of choice, owing to
a theoretical risk of increased spasms. However, its use is recommended if metronidazole is not available.
Passive immunization with human anti-tetanospasmin immunoglobulin or tetanus immunoglobulin is
crucial. If specific anti-tetanospasmin immunoglobulin is not available, then normal human
immunoglobulin may be given instead. All tetanus victims should be vaccinated against the disease or
offered a booster shot.

[edit] Mild tetanus

Mild cases of tetanus can be treated with:

 tetanus immunoglobulin IV or IM,
 metronidazole IV for 10 days,
 diazepam,
 tetanus vaccination.

[edit] Severe tetanus

Severe cases will require admission to intensive care. In addition to the measures listed above for mild
tetanus:

 human tetanus immunoglobulin injected intrathecally (increases clinical improvement from 4% to 35%),
 tracheostomy and mechanical ventilation for 3 to 4 weeks,
 magnesium, as an intravenous (IV) infusion, to prevent muscle spasm,
 diazepam as a continuous IV infusion,
 the autonomic effects of tetanus can be difficult to manage (alternating hyper- and hypotension, hyperpyrexia/hypothermia) and may require IV labetalol, magnesium, clonidine, or nifedipine.

Drugs such as diazepam or other muscle relaxants can be given to control the muscle spasms. In extreme
cases it may be necessary to paralyze the patient with curare-like drugs and use a mechanical ventilator.

In order to survive a tetanus infection, the maintenance of an airway and proper nutrition are required. An
intake of 3,500–4,000 calories, and at least 150 g of protein per day, is often given in liquid form through a tube directly into the stomach (percutaneous endoscopic gastrostomy) or through a drip into a vein (total parenteral nutrition). This high-caloric diet maintenance is required because of the increased metabolic
strain brought on by the increased muscle activity. Full recovery takes 4 to 6 weeks because the body
must regenerate destroyed nerve axon terminals.

[edit] Epidemiology

Disability-adjusted life year for tetanus per 100,000 inhabitants in 2004 (map legend ranges from ≤10 to ≥750 per 100,000; grey indicates no data).

Tetanus cases reported worldwide (1990-2004), ranging from strongly prevalent (in dark red) to very few cases (in light yellow) (grey, no data).

Tetanus is an international health problem, as C. tetani spores are ubiquitous. The disease occurs almost
exclusively in persons who are unvaccinated or inadequately immunized.[2] Tetanus occurs worldwide but
is more common in hot, damp climates with soil rich in organic matter. This is particularly true with
manure-treated soils, as the spores are widely distributed in the intestines and feces of many non-human
animals such as horses, sheep, cattle, dogs, cats, rats, guinea pigs, and chickens. Spores can be introduced
into the body through puncture wounds. In agricultural areas, a significant number of human adults may
harbor the organism. The spores can also be found on skin surfaces and in contaminated heroin.[3] Heroin
users, particularly those who inject the drug, appear to be at high risk for tetanus.

Tetanus – particularly the neonatal form – remains a significant public health problem in non-
industrialized countries. The World Health Organization estimates that 59,000 newborns worldwide died
in 2008 as a result of neonatal tetanus.[14] In the United States, 50-100 people become infected with
tetanus each year.[3] Nearly all of the cases in the United States occur in unimmunized individuals or
individuals who have allowed their inoculations to lapse.[3]

Tetanus is the only vaccine-preventable disease that is infectious but is not contagious.[3]

[edit] History

Tetanus was well known to ancient people who recognized the relationship between wounds and fatal
muscle spasms.[15] In 1884, Arthur Nicolaier isolated the strychnine-like toxin of tetanus from free-living,
anaerobic soil bacteria. The etiology of the disease was further elucidated in 1884 by Antonio Carle and
Giorgio Rattone, who demonstrated the transmissibility of tetanus for the first time. They produced
tetanus in rabbits by injecting pus from a patient with fatal tetanus into their sciatic nerves. In 1889, C.
tetani was isolated from a human victim by Kitasato Shibasaburō, who later showed that the organism
could produce disease when injected into animals, and that the toxin could be neutralized by specific
antibodies. In 1897, Edmond Nocard showed that tetanus antitoxin induced passive immunity in humans,
and could be used for prophylaxis and treatment. Tetanus toxoid vaccine was developed by P. Descombey
in 1924, and was widely used to prevent tetanus induced by battle wounds during World War II.[3]

[edit] Notable victims

 Tom Butler – English footballer; contracted after suffering a badly broken arm.
 George Hogg – English adventurer who rescued war orphans in China; died in 1945 from an
infection resulting from a foot injury.
 Joe Hill Louis – Memphis blues musician; died in 1957 as a result of an infected wound to his
thumb.
 George Montagu – English ornithologist; contracted tetanus when he stepped on a nail.
 Joe Powell – English footballer; contracted following amputation of a badly broken arm.
 John A. Roebling – civil engineer and architect famous for his bridge designs, particularly the Brooklyn Bridge; contracted tetanus following the amputation of his foot, which had been injured when a ferry crashed into a wharf.
 George Crockett Strong – Union brigadier general in the American Civil War; died of wounds sustained in the assault against Fort Wagner on Morris Island, South Carolina.
 Fred Thomson – silent film actor; stepped on a nail.
 John Thoreau (brother of Henry David Thoreau); nicked himself with a razor while shaving.
 Johann Tserclaes, Count of Tilly; wounded by a cannon ball in the Battle of Rain.
 Traveller – General Robert E. Lee's favorite horse; stepped on a nail.

Tuberculosis

From Wikipedia, the free encyclopedia


Tuberculosis

Classification and external resources

Chest X-ray of a patient with far-advanced tuberculosis

ICD-10 A15.–A19.

ICD-9 010–018

OMIM 607948

DiseasesDB 8515

MedlinePlus 000077 000624

eMedicine med/2324 emerg/618 radio/411

MeSH D014376

Tuberculosis or TB (short for tubercle bacillus) is a common and often deadly infectious disease caused
by various strains of mycobacteria, usually Mycobacterium tuberculosis in humans.[1] Tuberculosis
usually attacks the lungs but can also affect other parts of the body. It is spread through the air when
people who have the disease cough, sneeze, or spit.[2] Most infections in humans result in an
asymptomatic, latent infection, and about one in ten latent infections eventually progresses to active
disease, which, if left untreated, kills more than 50% of its victims.

The classic symptoms are a chronic cough with blood-tinged sputum, fever, night sweats, and weight loss
(the last giving rise to the formerly prevalent colloquial term "consumption"). Infection of other organs
causes a wide range of symptoms. Diagnosis relies on radiology (commonly chest X-rays), a tuberculin
skin test, blood tests, as well as microscopic examination and microbiological culture of bodily fluids.
Treatment is difficult and requires long courses of multiple antibiotics. Contacts are also screened and
treated if necessary. Antibiotic resistance is a growing problem, notably in multi-drug-resistant and extensively drug-resistant tuberculosis. Prevention relies on screening programs and vaccination, usually with Bacillus Calmette-
Guérin vaccine.

One third of the world's population is thought to be infected with M. tuberculosis,[3][4] and new infections
occur at a rate of about one per second.[5] The proportion of people who become sick with tuberculosis
each year is stable or falling worldwide but, because of population growth, the absolute number of new
cases is still increasing.[5] In 2007 there were an estimated 13.7 million chronic active cases, 9.3 million
new cases, and 1.8 million deaths, mostly in developing countries.[6] In addition, more people in the
developed world are contracting tuberculosis because their immune systems are compromised by
immunosuppressive drugs, substance abuse, or AIDS. The distribution of tuberculosis is not uniform
across the globe; about 80% of the population in many Asian and African countries test positive in
tuberculin tests, while only 5-10% of the US population test positive.[1]

Contents

 1 Signs and symptoms
 2 Causes
o 2.1 Risk factors
 3 Mechanism
o 3.1 Transmission
o 3.2 Pathogenesis
 4 Diagnosis
 5 Prevention
o 5.1 Vaccines
o 5.2 Screening
 6 Treatment
 7 Prognosis
 8 Epidemiology
 9 History
o 9.1 Other names
o 9.2 Folklore
o 9.3 Study and treatment
o 9.4 Age
 10 Society and culture
o 10.1 Public health
o 10.2 Notable victims
 11 Research
 12 In other animals
 13 References
 14 Further reading
 15 External links

[edit] Signs and symptoms

Main symptoms of variants and stages of tuberculosis,[7][8] with many symptoms overlapping with other
variants, while others are more (but not entirely) specific for certain variants. Multiple variants may be
present simultaneously.

Scanning electron micrograph of Mycobacterium tuberculosis

Phylogenetic tree of the genus Mycobacterium.

When the disease becomes active, 75% of the cases are pulmonary TB, that is, TB in the lungs.
Symptoms include chest pain, coughing up blood, and a productive, prolonged cough for more than three
weeks. Systemic symptoms include fever, chills, night sweats, appetite loss, weight loss, pallor, and often
a tendency to fatigue very easily.[5]

Tuberculosis also has a specific odour attached to it; this has led to trained animals being used to vet samples as a method of early detection.[9]

In the other 25% of active cases, the infection moves from the lungs, causing other kinds of TB,
collectively denoted extrapulmonary tuberculosis.[10] This occurs more commonly in immunosuppressed
persons and young children. Extrapulmonary infection sites include the pleura in tuberculosis pleurisy,
the central nervous system in meningitis, the lymphatic system in scrofula of the neck, the genitourinary
system in urogenital tuberculosis, and bones and joints in Pott's disease of the spine. An especially serious
form is disseminated TB, more commonly known as miliary tuberculosis. Extrapulmonary TB may co-
exist with pulmonary TB as well.[11]

[edit] Causes

Main article: Mycobacterium tuberculosis

The primary cause of TB, Mycobacterium tuberculosis, is a small aerobic non-motile bacillus. The high lipid content of this pathogen accounts for many of its unique clinical characteristics.[12] It divides every 16 to 20 hours, an extremely slow rate compared with other bacteria, which usually divide in less than an hour.[13] (For example, one of the fastest-growing bacteria is a strain of E. coli that can divide roughly every 20 minutes.) Since MTB has a cell wall but lacks a phospholipid outer membrane, it is classified as a Gram-positive bacterium. However, if a Gram stain is performed, MTB either stains very weakly Gram-positive or does not retain dye due to the high lipid and mycolic acid content of its cell wall.[14] MTB can
withstand weak disinfectants and survive in a dry state for weeks. In nature, the bacterium can grow only
within the cells of a host organism, but M. tuberculosis can be cultured in vitro.[15]
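The practical weight of these doubling times can be seen with a back-of-the-envelope calculation. The sketch below (an illustration, not from the article) assumes ideal, unconstrained exponential growth and takes 18 hours as a representative MTB doubling time within the quoted 16-to-20-hour range:

```python
# Idealized exponential growth: one cell yields 2 ** (t / t_double) cells
# after time t. Real cultures are resource-limited, so these are upper bounds.

def cells_after(hours: float, doubling_time_hours: float) -> float:
    """Cells descended from a single cell after `hours` of ideal growth."""
    return 2 ** (hours / doubling_time_hours)

print(f"E. coli (20 min doubling), 24 h: {cells_after(24, 20 / 60):.2g}")  # ~4.7e+21
print(f"MTB (18 h doubling), 24 h:      {cells_after(24, 18):.2f}")        # ~2.52
```

This contrast, sextillions of potential descendants versus barely more than one doubling, is one reason MTB cultures take weeks rather than days (see the Diagnosis section below).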

Using histological stains on expectorate samples from phlegm (also called sputum), scientists can identify
MTB under a regular microscope. Since MTB retains certain stains after being treated with acidic
solution, it is classified as an acid-fast bacillus (AFB).[1][14] The most common acid-fast staining
technique, the Ziehl-Neelsen stain, dyes AFBs a bright red that stands out clearly against a blue
background. Other ways to visualize AFBs include an auramine-rhodamine stain and fluorescent
microscopy.

The M. tuberculosis complex includes four other TB-causing mycobacteria: M. bovis, M. africanum, M.
canetti and M. microti.[16] M. africanum is not widespread, but in parts of Africa it is a significant cause of
tuberculosis.[17][18] M. bovis was once a common cause of tuberculosis, but the introduction of pasteurized
milk has largely eliminated this as a public health problem in developed countries.[1][19] M. canetti is rare
and seems to be limited to Africa, although a few cases have been seen in African emigrants.[20] M.
microti is mostly seen in immunodeficient people, although it is possible that the prevalence of this
pathogen has been underestimated.[21]

Other known pathogenic mycobacteria include Mycobacterium leprae, Mycobacterium marinum, Mycobacterium avium and M. kansasii. The last two are part of the nontuberculous mycobacteria (NTM) group. Nontuberculous mycobacteria cause neither TB nor leprosy, but they do cause pulmonary diseases resembling TB.[22]

[edit] Risk factors

Persons with silicosis have an approximately 30-fold greater risk for developing TB.[23] Silica particles
irritate the respiratory system, causing immunogenic responses such as phagocytosis, which, as a
consequence, results in high lymphatic vessel deposits.[24] It is this interference and blockage of
macrophage function that increases the risk of tuberculosis.[25] Persons with chronic renal failure and also
on hemodialysis have an increased risk: 10–25 times greater than the general population. Persons with
diabetes mellitus have a risk for developing active TB that is two to four times greater than persons
without diabetes mellitus, and this risk is likely greater in persons with insulin-dependent or poorly
controlled diabetes. Other clinical conditions that have been associated with active TB include
gastrectomy with attendant weight loss and malabsorption, jejunoileal bypass, renal and cardiac
transplantation, carcinoma of the head or neck, and other neoplasms (e.g., lung cancer, lymphoma, and
leukemia).[26]

Given that silicosis greatly increases the risk of tuberculosis, more research on the effect of various indoor and outdoor air pollutants on the disease is needed. Some possible indoor sources of silica include paint, concrete and Portland cement. Crystalline silica is found in concrete, masonry, sandstone, rock, paint, and other abrasives. The cutting, breaking, crushing, drilling, grinding, or abrasive blasting of these materials may produce fine silica dust. It can also be found in soil, mortar, plaster, and shingles. Dusty clothing worn at home or in a car may carry silica dust that family members will then breathe.[27]

Low body weight is associated with risk of tuberculosis as well. A body mass index (BMI) below 18.5 increases the risk by two to three times. On the other hand, an increase in body weight lowers the risk.[28][29]
Patients with diabetes mellitus are at increased risk of contracting tuberculosis,[30] and they have a poorer
response to treatment, possibly due to poorer drug absorption.[31]

Other conditions that increase risk include the sharing of needles among IV drug users; recent TB infection or a history of inadequately treated TB; a chest X-ray suggestive of previous TB, showing fibrotic lesions and nodules; prolonged corticosteroid therapy and other immunosuppressive therapy; immunocompromise (30-40% of AIDS patients in the world also have TB); hematologic and reticuloendothelial diseases, such as leukemia and Hodgkin's disease; end-stage kidney disease; intestinal bypass; chronic malabsorption syndromes; vitamin D deficiency;[32] and low body weight.[1][11]

Twin studies in the 1940s showed that susceptibility to TB was heritable. If one of a pair of twins got TB,
then the other was more likely to get TB if he was identical than if he was not.[33] These findings were
more recently confirmed by a series of studies in South Africa.[34][35][36] Specific gene polymorphisms in
IL12B have been linked to tuberculosis susceptibility.[37]

Some drugs, including rheumatoid arthritis drugs that work by blocking tumor necrosis factor-alpha (an
inflammation-causing cytokine), raise the risk of activating a latent infection due to the importance of this
cytokine in the immune defense against TB.[38]

[edit] Mechanism

[edit] Transmission

When people suffering from active pulmonary TB cough, sneeze, speak, or spit, they expel infectious
aerosol droplets 0.5 to 5 µm in diameter. A single sneeze can release up to 40,000 droplets.[39] Each one of
these droplets may transmit the disease, since the infectious dose of tuberculosis is very low and inhaling fewer than ten bacteria may cause an infection.[40][41]

People with prolonged, frequent, or intense contact are at particularly high risk of becoming infected, with
an estimated 22% infection rate. A person with active but untreated tuberculosis can infect 10–15 other
people per year.[5] Others at risk include people in areas where TB is common, people who inject drugs
using unsanitary needles, residents and employees of high-risk congregate settings, medically under-
served and low-income populations, high-risk racial or ethnic minority populations, children exposed to
adults in high-risk categories, patients immunocompromised by conditions such as HIV/AIDS, people
who take immunosuppressant drugs, and health care workers serving these high-risk clients.[42]

Transmission can only occur from people with active — not latent — TB.[1] The probability of
transmission from one person to another depends upon the number of infectious droplets expelled by a
carrier, the effectiveness of ventilation, the duration of exposure, and the virulence of the M. tuberculosis
strain.[11] The chain of transmission can, therefore, be broken by isolating patients with active disease and
starting effective anti-tuberculous therapy. After two weeks of such treatment, people with non-resistant
active TB generally cease to be contagious. If someone does become infected, it will take three to four weeks (at least 21 days) before the newly infected person can transmit the disease to others.[43] TB
can also be transmitted by eating meat infected with TB. Mycobacterium bovis causes TB in cattle. (See
details below.)
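One standard way to formalize how these factors combine (infectious droplet output, exposure duration, and ventilation) is the Wells-Riley model of airborne infection, which has often been applied to TB. The model is not mentioned in this article, and every parameter value in the sketch below is hypothetical, chosen only to show the shape of the relationship:

```python
import math

# Wells-Riley model of airborne infection risk:
#   P = 1 - exp(-I * q * p * t / Q)
# I = number of infectors, q = "quanta" of infection generated per hour,
# p = breathing rate (m^3/h), t = exposure time (h), Q = room ventilation (m^3/h).
# All numbers below are illustrative assumptions, not measured values.

def wells_riley(infectors: float, quanta_per_hour: float,
                breathing_m3_per_hour: float, hours: float,
                ventilation_m3_per_hour: float) -> float:
    """Probability that one susceptible occupant becomes infected."""
    dose = infectors * quanta_per_hour * breathing_m3_per_hour * hours
    return 1.0 - math.exp(-dose / ventilation_m3_per_hour)

# One untreated carrier, a nominal quanta rate of 10/h, a resting breathing
# rate of ~0.5 m^3/h, and 8 hours of shared air, at two ventilation levels:
for Q in (100.0, 1000.0):  # poorly vs. well ventilated room, in m^3/h
    print(f"Q = {Q:6.0f} m^3/h -> P(infection) = {wells_riley(1, 10, 0.5, 8, Q):.1%}")
```

With all else fixed, a tenfold increase in ventilation drops the modeled infection probability from roughly a third to about four percent, which mirrors the point above about breaking the chain of transmission.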

[edit] Pathogenesis

About 90% of those infected with Mycobacterium tuberculosis have asymptomatic, latent TB infection
(sometimes called LTBI), with only a 10% lifetime chance that a latent infection will progress to TB
disease.[1] However, if untreated, the death rate for these active TB cases is more than 50%.[44]

TB infection begins when the mycobacteria reach the pulmonary alveoli, where they invade and replicate
within the endosomes of alveolar macrophages.[1][45] The primary site of infection in the lungs is called the
Ghon focus, and is generally located in either the upper part of the lower lobe, or the lower part of the
upper lobe.[1] Bacteria are picked up by dendritic cells, which do not allow replication, although these
cells can transport the bacilli to local (mediastinal) lymph nodes. Further spread is through the
bloodstream to other tissues and organs where secondary TB lesions can develop in other parts of the lung
(particularly the apex of the upper lobes), peripheral lymph nodes, kidneys, brain, and bone.[1][46] All parts
of the body can be affected by the disease, though it rarely affects the heart, skeletal muscles, pancreas
and thyroid.[47]

Tuberculosis is classified as one of the granulomatous inflammatory conditions. Macrophages, T
lymphocytes, B lymphocytes and fibroblasts are among the cells that aggregate to form a granuloma, with
lymphocytes surrounding the infected macrophages. The granuloma functions not only to prevent
dissemination of the mycobacteria, but also provides a local environment for communication of cells of
the immune system. Within the granuloma, T lymphocytes secrete cytokines such as interferon gamma,
which activates macrophages to destroy the bacteria with which they are infected.[48] Cytotoxic T cells can
also directly kill infected cells, by secreting perforin and granulysin.[45]

Importantly, bacteria are not always eliminated within the granuloma, but can become dormant, resulting
in a latent infection.[1] Another feature of the granulomas of human tuberculosis is the development of
abnormal cell death, also called necrosis, in the center of tubercles. To the naked eye this has the texture
of soft white cheese and was termed caseous necrosis.[49]

If TB bacteria gain entry to the bloodstream from an area of damaged tissue they spread through the body
and set up many foci of infection, all appearing as tiny white tubercles in the tissues. This severe form of
TB disease is most common in infants and the elderly and is called miliary tuberculosis. Patients with this
disseminated TB have a fatality rate near 100% if untreated. However, if treated early, the fatality rate is
reduced to near 10%.[50]

In many patients the infection waxes and wanes. Tissue destruction and necrosis are balanced by healing
and fibrosis.[49] Affected tissue is replaced by scarring and cavities filled with cheese-like white necrotic
material. During active disease, some of these cavities are joined to the air passages (bronchi), and this
material can be coughed up. It contains living bacteria and can therefore pass on infection. Treatment with
appropriate antibiotics kills bacteria and allows healing to take place. Upon cure, affected areas are
eventually replaced by scar tissue.[49]

If untreated, infection with Mycobacterium tuberculosis can become lobar pneumonia.[51]

[edit] Diagnosis

Main article: Tuberculosis diagnosis

See also: Tuberculosis classification

Mycobacterium tuberculosis (stained red) in sputum

Tuberculosis is diagnosed definitively by identifying the causative organism (Mycobacterium tuberculosis) in a clinical sample (for example, sputum or pus). When this is not possible, a probable (although sometimes inconclusive[2]) diagnosis may be made using imaging (X-rays or scans) and/or a tuberculin skin test (Mantoux test).

The main problem with tuberculosis diagnosis is the difficulty in culturing this slow-growing organism in
the laboratory (it may take 4 to 12 weeks for blood or sputum culture). A complete medical evaluation for
TB must include a medical history, a physical examination, a chest X-ray, microbiological smears, and
cultures. It may also include a tuberculin skin test and serological tests. The interpretation of the tuberculin
skin test depends upon the person's risk factors for infection and progression to TB disease, such as
exposure to other cases of TB or immunosuppression.[11]

Currently, latent infection is diagnosed in a non-immunized person by a tuberculin skin test, which yields
a delayed hypersensitivity type response to an extract made from M. tuberculosis.[1] Those immunized for
TB or with past-cleared infection will respond with delayed hypersensitivity parallel to those currently in
a state of infection, so the test must be used with caution, particularly with regard to persons from
countries where TB immunization is common.[52] Tuberculin tests have the disadvantage of producing
false negatives, especially when the patient is co-morbid with sarcoidosis, Hodgkin's lymphoma,
malnutrition, or most notably active tuberculosis disease.[1] The newer interferon release assays (IGRAs)
overcome many of these problems. IGRAs are in vitro blood tests that are more specific than the skin test.
IGRAs detect the release of interferon gamma in response to mycobacterial proteins such as ESAT-6.[53]
These are not affected by immunization or environmental mycobacteria, so generate fewer false positive
results.[54] There is also evidence that the T-SPOT.TB IGRA is more sensitive than the skin test.[55]

New TB tests have been developed that are fast and accurate. These include polymerase chain reaction
assays for the detection of bacterial DNA.[56] One such molecular diagnostic test gives results in 100 minutes and is currently being offered to 116 low- and middle-income countries at a discount with support from WHO and the Bill and Melinda Gates Foundation.[57]

[edit] Prevention

Map showing the 22 high-burden countries (HBC) that according to WHO account for 80% of all new TB
cases arising each year. The Global Plan is especially aimed at these countries.

TB prevention and control takes two parallel approaches. In the first, people with TB and their contacts
are identified and then treated. Identification of infections often involves testing high-risk groups for TB. In the second approach, children are vaccinated to protect them from TB. No vaccine is available that
provides reliable protection for adults. However, in tropical areas where the levels of other species of
mycobacteria are high, exposure to nontuberculous mycobacteria gives some protection against TB.[58]

The World Health Organization (WHO) declared TB a global health emergency in 1993, and the Stop TB
Partnership developed a Global Plan to Stop Tuberculosis that aims to save 14 million lives between 2006
and 2015.[59] Since humans are the only host of Mycobacterium tuberculosis, eradication would be
possible. This goal would be helped greatly by an effective vaccine.[60]

[edit] Vaccines

Many countries use Bacillus Calmette-Guérin (BCG) vaccine as part of their TB control programmes,
especially for infants. According to the WHO, this is the most often used vaccine worldwide, with 85% of
infants in 172 countries immunized in 1993.[61] One country that notably does not widely administer BCG
is the United States, where TB is rather uncommon.[62] BCG was the first vaccine for TB; it was developed at the Pasteur Institute in France between 1905 and 1921.[63] However, mass vaccination with BCG did not
start until after World War II.[64] The protective efficacy of BCG for preventing serious forms of TB (e.g.
meningitis) in children is greater than 80%; its protective efficacy for preventing pulmonary TB in
adolescents and adults is variable, ranging from 0 to 80%.[65]

In South Africa, the country with the highest prevalence of TB, BCG is given to all children under age
three.[66] However, BCG is less effective in areas where mycobacteria are less prevalent; therefore BCG is
not given to the entire population in these countries. In the USA, for example, BCG vaccine is not
recommended except for people who meet specific criteria:[11]

 Infants or children with negative skin test results who are continually exposed to untreated or
ineffectively treated patients or will be continually exposed to multidrug-resistant TB.
 Healthcare workers considered on an individual basis in settings in which a high percentage of
MDR-TB patients has been found, transmission of MDR-TB is likely, and TB control precautions
have been implemented and were not successful.

BCG provides some protection against severe forms of pediatric TB, but has been shown to be unreliable
against adult pulmonary TB, which accounts for most of the disease burden worldwide. Currently, there
are more cases of TB on the planet than at any other time in history and most agree there is an urgent need
for a newer, more effective vaccine that would prevent all forms of TB—including drug resistant
strains—in all age groups and among people with HIV.[67]

Several new vaccines to prevent TB infection are being developed. The first recombinant tuberculosis vaccine, rBCG30, entered clinical trials in the United States in 2004, sponsored by the National Institute of
Allergy and Infectious Diseases (NIAID).[68] A 2005 study showed that a DNA TB vaccine given with
conventional chemotherapy can accelerate the disappearance of bacteria as well as protect against re-
infection in mice; it may take four to five years to be available in humans.[69] A very promising TB
vaccine, MVA85A, is currently in phase II trials in South Africa by a group led by Oxford University,[70]
and is based on a genetically modified vaccinia virus. Many other strategies are also being used to
develop novel vaccines,[71] including both subunit vaccines (fusion molecules composed of two
recombinant proteins delivered in an adjuvant) such as Hybrid-1, HyVac4 or M72, and recombinant
adenoviruses such as Ad35.[72][73][74][75] Some of these vaccines can be effectively administered without
needles, making them preferable for areas where HIV is very common.[76] All of these vaccines have been
successfully tested in humans and are now in extended testing in TB-endemic regions. To encourage further discovery, researchers and policymakers are promoting new economic models of vaccine
development including prizes, tax incentives and advance market commitments.[77][78]

[edit] Screening

Mantoux tuberculin skin test

Mantoux tuberculin skin tests are often used for routine screening of high risk individuals.[79]

Interferon-γ release assays are blood tests used in the diagnosis of some infectious diseases. There are
currently two interferon-γ release assays available for the diagnosis of tuberculosis:

 QuantiFERON-TB Gold (licensed in US, Europe and Japan); and
 T-SPOT.TB, a form of ELISPOT (licensed in Europe).

Chest photofluorography has been used in the past for mass screening for tuberculosis.

[edit] Treatment

Main article: Tuberculosis treatment

Treatment for TB uses antibiotics to kill the bacteria. Effective TB treatment is difficult, due to the
unusual structure and chemical composition of the mycobacterial cell wall, which makes many antibiotics
ineffective and hinders the entry of drugs.[80][81][82][83] The two antibiotics most commonly used are
rifampicin and isoniazid. However, instead of the short course of antibiotics typically used to cure other
bacterial infections, TB requires much longer periods of treatment (around 6 to 24 months) to entirely
eliminate mycobacteria from the body.[11] Latent TB treatment usually uses a single antibiotic, while
active TB disease is best treated with combinations of several antibiotics, to reduce the risk of the bacteria
developing antibiotic resistance.[84] People with latent infections are treated to prevent them from
progressing to active TB disease later in life.

Drug-resistant tuberculosis is transmitted in the same way as regular TB. Primary resistance occurs in
persons infected with a resistant strain of TB. A patient with fully susceptible TB develops secondary
resistance (acquired resistance) during TB therapy because of inadequate treatment, not taking the
prescribed regimen appropriately, or using low-quality medication.[84] Drug-resistant TB is a public health
issue in many developing countries, as treatment is longer and requires more expensive drugs. Multi-
drug-resistant tuberculosis (MDR-TB) is defined as resistance to the two most effective first-line TB drugs: rifampicin and isoniazid. Extensively drug-resistant TB (XDR-TB) is also resistant to three or
more of the six classes of second-line drugs.[85]

The DOTS (Directly Observed Treatment, Short-course) strategy of tuberculosis treatment recommended by WHO was based on clinical trials done in the 1970s by the Tuberculosis Research Centre, Chennai, India. The country in which a person with TB lives can determine what treatment they receive: because multidrug-resistant tuberculosis is resistant to most first-line medications, second-line antituberculosis medications are necessary for a cure. However, the price of these medications is high; thus poor people in the developing world have no or limited access to these treatments.[86]

[edit] Prognosis

Progression from TB infection to TB disease occurs when the TB bacilli overcome the immune system
defenses and begin to multiply. In primary TB disease—1–5% of cases—this occurs soon after
infection.[1] However, in the majority of cases, a latent infection occurs that has no obvious symptoms.[1]
These dormant bacilli can produce tuberculosis in 2–23% of these latent cases, often many years after
infection.[87] The risk of reactivation increases with immunosuppression, such as that caused by infection
with HIV. In patients co-infected with M. tuberculosis and HIV, the risk of reactivation increases to 10%
per year.[1][44]
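The quoted annual figure can be turned into a multi-year risk with the standard compounding formula 1 - (1 - p)^n. The sketch below is illustrative only: it assumes, which the article does not state, that the ~10% annual reactivation risk is constant and independent from year to year:

```python
# Cumulative risk over n years from a constant annual risk p, assuming
# independence between years (an illustrative simplification).

def cumulative_risk(annual_risk: float, years: int) -> float:
    """P(reactivation within `years` years) = 1 - (1 - p) ** years."""
    return 1.0 - (1.0 - annual_risk) ** years

for years in (1, 5, 10):
    print(f"{years:>2} years at 10%/year: {cumulative_risk(0.10, years):.1%}")
# -> 10.0%, 41.0%, 65.1%
```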

Studies utilizing DNA fingerprinting of M. tuberculosis strains have shown that reinfection contributes
more substantially to recurrent TB than previously thought,[88] with between 12% and 77% of cases
attributable to reinfection (instead of reactivation).[89]

[edit] Epidemiology

Roughly a third of the world's population has been infected with M. tuberculosis, and new infections
occur at a rate of one per second.[5] However, not all infections with M. tuberculosis cause TB disease and
many infections are asymptomatic.[93] In 2007, an estimated 13.7 million people had active TB disease,
with 9.3 million new cases and 1.8 million deaths; the annual incidence rate varied from 363 per 100,000
in Africa to 32 per 100,000 in the Americas.[6] Tuberculosis is the world's greatest infectious killer of
women of reproductive age and the leading cause of death among people with HIV/AIDS.[94]
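As a quick sanity check on the "one per second" figure (simple arithmetic, not a number taken from the article's sources):

```python
# One new M. tuberculosis infection per second, scaled to a (non-leap) year.
seconds_per_year = 60 * 60 * 24 * 365
print(f"{seconds_per_year:,} new infections per year")  # 31,536,000 (~31.5 million)
```

Most of these are latent infections; as noted above, only a fraction (9.3 million in 2007) appear as new active cases in a given year.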

The rise in HIV infections and the neglect of TB control programs have enabled a resurgence of
tuberculosis.[95] The emergence of drug-resistant strains has also contributed to this new epidemic with,
from 2000 to 2004, 20% of TB cases being resistant to standard treatments and 2% resistant to second-
line drugs.[85] The rate at which new TB cases occur varies widely, even in neighboring countries,
apparently because of differences in health care systems.[96]

In 2007, the country with the highest estimated incidence rate of TB was Swaziland, with 1200 cases per
100,000 people. India had the largest total incidence, with an estimated 2.0 million new cases.[6] The
Philippines ranks fourth in the world for the number of cases of tuberculosis and has the highest number
of cases per head in Southeast Asia. Almost two thirds of Filipinos have tuberculosis, and up to an
additional five million people are infected yearly.[97] In developed countries, tuberculosis is less common
and is mainly an urban disease. In the United Kingdom, the national average was 15 per 100,000 in 2007,
and the highest incidence rates in Western Europe were 30 per 100,000 in Portugal and Spain. These rates
compared with 98 per 100,000 in China and 48 per 100,000 in Brazil. In the United States, the overall
tuberculosis case rate was 4 per 100,000 persons in 2007.[91] In Canada tuberculosis is still endemic in
some rural areas.[98]

The incidence of TB varies with age. In Africa, TB primarily affects adolescents and young adults.[99]
However, in countries where TB has gone from high to low incidence, such as the United States, TB is
mainly a disease of older people, or of the immunocompromised.[1][100]

There are a number of known factors that make people more susceptible to TB infection: worldwide the
most important of these is HIV. Co-infection with HIV is a particular problem in Sub-Saharan Africa, due
to the high incidence of HIV in these countries.[101][102] Smoking more than 20 cigarettes a day also
increases the risk of TB by two to four times.[103][104] Diabetes mellitus is also an important risk factor that
is growing in importance in developing countries.[105] Other disease states that increase the risk of
developing tuberculosis are Hodgkin lymphoma, end-stage renal disease, chronic lung disease,
malnutrition, and alcoholism.[1]

Diet may also modulate risk. For example, among immigrants in London from the Indian subcontinent,
vegetarian Hindu Asians were found to have an 8.5-fold increased risk of tuberculosis, compared to
Muslims who ate meat and fish daily.[106] Although a causal link is not proved by this data,[107] this
increased risk could be caused by micronutrient deficiencies: possibly iron, vitamin B12 or vitamin D.[106]
Further studies have provided more evidence of a link between vitamin D deficiency and an increased risk
of contracting tuberculosis.[108][109] Globally, the severe malnutrition common in parts of the developing
world causes a large increase in the risk of developing active tuberculosis, due to its damaging effects on
the immune system.[110][111] Along with overcrowding, poor nutrition may contribute to the strong link
observed between tuberculosis and poverty.[112][113]

Prisoners, especially in poor countries, are particularly vulnerable to infectious diseases such as
HIV/AIDS and TB. Prisons provide conditions that allow TB to spread rapidly, due to overcrowding,
poor nutrition and a lack of health services. Since the early 1990s, TB outbreaks have been reported in
prisons in many countries in Eastern Europe. The prevalence of TB in prisons is much higher than among
the general population – in some countries as much as 40 times higher.[114][115]

[edit] History

Main article: History of tuberculosis

Tubercular decay has been found in the spines of Egyptian mummies. Pictured: Egyptian mummy in the
British Museum

Tuberculosis has been present in humans since antiquity. The earliest unambiguous detection of
Mycobacterium tuberculosis is in the remains of bison dated 18,000 years before the present.[116] Whether
tuberculosis originated in cattle and then transferred to humans, or diverged from a common ancestor
infecting a different species, is currently unclear.[117] However, it is clear that M. tuberculosis is not
directly descended from M. bovis, which seems to have evolved relatively recently.[118]

Skeletal remains from a Neolithic Settlement in the Eastern Mediterranean show prehistoric humans
(7000 BC) had TB,[119] and tubercular decay has been found in the spines of mummies from 3000–2400
BC.[120] Phthisis is a Greek term for tuberculosis; around 460 BC, Hippocrates identified phthisis as the
most widespread disease of the times involving coughing up blood and fever, which was almost always
fatal.[121] In South America, the earliest evidence of tuberculosis is associated with the Paracas-Caverna
culture (circa 750 BC to circa 100 AD).[122][123] Skeletal remains from prehistoric North America indicate
that the disease was so common that "virtually every member of these late prehistoric communities had
primary exposure to tuberculosis."[124]

[edit] Other names

In the past, tuberculosis has been called consumption, because it seemed to consume people from within,
with a bloody cough, fever, pallor, and long relentless wasting. Other names included phthisis (Greek for
consumption) and phthisis pulmonalis; scrofula (in adults), affecting the lymphatic system and resulting
in swollen neck glands; tabes mesenterica, TB of the abdomen; lupus vulgaris, TB of the skin;
wasting disease; white plague, because sufferers appear markedly pale; king's evil, because it was
believed that a king's touch would heal scrofula; and Pott's disease, or gibbus of the spine and
joints.[125][126]

Dr. Robert Koch discovered the tuberculosis bacillus.

Miliary tuberculosis—now commonly known as disseminated TB—occurs when the infection invades the
circulatory system, resulting in millet-like seeding of TB bacilli in the lungs as seen on an X-ray.[125][127]
TB is also called Koch's disease, after the scientist Robert Koch.[128]

[edit] Folklore

Before the Industrial Revolution, tuberculosis may sometimes have been[clarification needed] regarded as vampirism. When one member of a family died from it, the other members who were infected would lose
their health slowly. People[clarification needed] believed that this was caused by the original victim draining the
life from the other family members. Furthermore, people who had TB exhibited symptoms similar to what
people considered to be vampire traits. People with TB often have symptoms such as red, swollen eyes
(which also creates a sensitivity to bright light), pale skin, extremely low body heat, a weak heart and
coughing up blood, suggesting the idea that the only way for the afflicted to replenish this loss of blood was
by sucking blood.[129] Another folk belief told that the affected individual was being forced, nightly, to
attend fairy revels, so that the victim wasted away owing to lack of rest; this belief was most common
when a strong connection was seen between the fairies and the dead.[130] Similarly, but less commonly, it
was attributed to the victims being "hagridden"—being transformed into horses by witches (hags) to
travel to their nightly meetings, again resulting in a lack of rest.[130]

TB was romanticized in the nineteenth century. Many people believed TB produced feelings of euphoria
referred to as Spes phthisica ("hope of the consumptive"). It was believed that TB sufferers who were
artists had bursts of creativity as the disease progressed. It was also believed that TB sufferers acquired a
final burst of energy just before they died that made women more beautiful and men more creative.[131][132]
In the early 20th century, some believed TB to be caused by masturbation.[133]

[edit] Study and treatment

The study of tuberculosis, sometimes known as phthisiatry, dates back to The Canon of Medicine written
by Ibn Sina (Avicenna) in the 1020s. He was the first physician to identify pulmonary tuberculosis as a
contagious disease, the first to recognise the association with diabetes, and the first to suggest that it could
spread through contact with soil and water.[134][135][unreliable source?] Avicenna adopted, from the Greeks, the
theory that epidemics are caused by pollution in the air (miasma, a noxious form of "bad air").[136] He
developed the method of quarantine in order to limit the spread of tuberculosis.[137][unreliable source?] In ancient
times, treatments focused on sufferers' diets. Pliny the Elder described several methods in his Natural
History: "wolf's liver taken in thin wine, the lard of a sow that has been fed upon grass, or the flesh of a
she-ass taken in broth".[138]

Although it was established that the pulmonary form was associated with "tubercles" by Dr Richard
Morton in 1689,[139][140] due to the variety of its symptoms, TB was not identified as a single disease until
the 1820s and was not named "tuberculosis" until 1839 by J. L. Schönlein.[141] During the years 1838–1845,
Dr. John Croghan, the owner of Mammoth Cave, brought a number of tuberculosis sufferers into
the cave in the hope of curing the disease with the constant temperature and purity of the cave air; they
died within a year.[142] The first TB sanatorium was opened in 1854 by Hermann Brehmer in Görbersdorf,
Germany (today Sokołowsko, Poland).[143]

The bacillus causing tuberculosis, Mycobacterium tuberculosis, was identified and described on 24 March
1882 by Robert Koch. He received the Nobel Prize in Physiology or Medicine in 1905 for this
discovery.[144] Koch did not believe that bovine (cattle) and human tuberculosis were similar, which
delayed the recognition of infected milk as a source of infection. Later, this source was eliminated by the
pasteurization process. Koch announced a glycerine extract of the tubercle bacilli as a remedy for
tuberculosis in 1890, calling it "tuberculin". It was not effective, but was later adapted as a test for pre-
symptomatic tuberculosis.[145]

The first genuine success in immunizing against tuberculosis was developed from attenuated bovine-
strain tuberculosis by Albert Calmette and Camille Guérin in 1906. It was called "BCG" (Bacillus of
Calmette and Guérin). The BCG vaccine was first used on humans in 1921 in France,[63] but it was not
until after World War II that BCG received widespread acceptance in the USA, Great Britain, and
Germany.[64]

Tuberculosis, or "consumption" as it was commonly known, caused the most widespread public concern
in the 19th and early 20th centuries as an endemic disease of the urban poor.[146] In 1815, one in four
deaths in England was from consumption; by 1918 one in six deaths in France was still caused by TB. In
the 20th century, tuberculosis killed an estimated 100 million people.[147] After the establishment in the
1880s that the disease was contagious, TB was made a notifiable disease in Britain; there were campaigns
to stop spitting in public places, and the infected poor were pressured to enter sanatoria that resembled
prisons; the sanatoria for the middle and upper classes offered excellent care and constant medical
attention.[143] Whatever the purported benefits of the fresh air and labor in the sanatoria, even under the
best conditions, 50% of those who entered were dead within five years (1916).[143]

Public health campaigns tried to halt the spread of TB

The promotion of Christmas Seals began in Denmark in 1904 as a way to raise money for
tuberculosis programs. It expanded to the United States and Canada in 1907–1908 to help the National
Tuberculosis Association (later called the American Lung Association).

In the United States, concern about the spread of tuberculosis played a role in the movement to prohibit
public spitting except into spittoons.

In Europe, deaths from TB fell from 500 per 100,000 in 1850 to 50 per 100,000 by 1950.
Improvements in public health were reducing tuberculosis even before the arrival of antibiotics. The
disease remained such a significant threat to public health that when the Medical Research Council was
formed in Britain in 1913, its initial focus was tuberculosis research.[148]

It was not until 1946 with the development of the antibiotic streptomycin that effective treatment and cure
became possible. Prior to the introduction of this drug, the only treatments besides sanatoria were surgical
interventions, including bronchoscopy and suction as well as the pneumothorax or plombage technique —
collapsing an infected lung to "rest" it and allow lesions to heal — a technique that was of little benefit
and was mostly discontinued by the 1950s.[149] The emergence of multidrug-resistant TB has again
introduced surgery as part of the treatment for these infections. Here, surgical removal of infected chest
cavities reduces the number of bacteria in the lungs and increases the exposure of the remaining
bacteria to drugs in the bloodstream; it is therefore thought to increase the effectiveness of the
chemotherapy.[150]

Hopes that the disease could be completely eliminated have been dashed since the rise of drug-resistant
strains in the 1980s. For example, tuberculosis cases in Britain, numbering around 117,000 in 1913, had
fallen to around 5,000 in 1987, but cases rose again, reaching 6,300 in 2000 and 7,600 in 2005.[151]
Due to the elimination of public health facilities in New York and the emergence of HIV, there was a
resurgence of TB in the late 1980s.[152] The number of patients failing to complete their course of drugs
was high. New York had to cope with more than 20,000 TB patients with multidrug-resistant strains
(resistant to at least both rifampin and isoniazid).

The resurgence of tuberculosis resulted in the declaration of a global health emergency by the World
Health Organization (WHO) in 1993.[153] Every year, nearly half a million new cases of multidrug-
resistant tuberculosis (MDR-TB) are estimated to occur worldwide.[154]

[edit] Age

Tuberculosis has been around for millennia. The oldest known human remains showing signs of
tuberculosis infection are 9,000 years old.[155] During this period, M. tuberculosis has lost numerous
coding and non-coding regions in its genome, losses that can be used to distinguish between strains of the
bacteria. The implication is that M. tuberculosis strains differ geographically, so their genetic differences
can be used to track the origins and movement of each strain.[156]

Recently a new species was discovered, the first in 20 years.[157][158]

[edit] Society and culture

See also: Tuberculosis in popular culture

Through its effect on important historical figures, tuberculosis has particularly influenced European
history and has become a theme in art, mostly in literature, music, and film.

[edit] Public health

Tuberculosis is one of the three primary diseases of poverty along with AIDS and malaria.[159] The Global
Fund to Fight AIDS, Tuberculosis and Malaria was started in 2002 to raise funds to address these
infectious diseases. Globalization has led to increased opportunities for disease spread. A tuberculosis
scare occurred in 2007 when Andrew Speaker flew on a transatlantic flight while infected with multidrug-
resistant tuberculosis.[160]

In the United States, the National Center for HIV, STD, and TB Prevention, as part of the Centers for
Disease Control and Prevention (CDC), is responsible for public health surveillance and prevention
research.

[edit] Notable victims

Main article: List of tuberculosis victims

[edit] Research

The Mycobacterium Tuberculosis Structural Genomics Consortium is a global consortium of scientists
conducting research regarding the diagnosis and treatment of tuberculosis. It is attempting to
determine the three-dimensional structures of proteins from M. tuberculosis.[citation needed]

[edit] In other animals

Main article: Mycobacterium bovis

Tuberculosis can be carried by mammals; domesticated species, such as cats and dogs, are generally free
of tuberculosis, but wild animals may be carriers.

Mycobacterium bovis causes TB in cattle. An effort to eradicate bovine tuberculosis from the cattle and
deer herds of New Zealand is underway. It has been found that herd infection is more likely in areas
where infected natural reservoir species, such as Australian brush-tailed possums, come into contact with domestic
livestock at farm/bush borders.[161] Controlling the vectors through possum eradication and monitoring the
level of disease in livestock herds through regular surveillance are seen as a "two-pronged" approach to
ridding New Zealand of the disease.

In Ireland and the United Kingdom, badgers have been identified as one vector species for the
transmission of bovine tuberculosis. As a result, governments have come under pressure from some
quarters, primarily dairy farmers, to mount an active campaign of eradication of badgers in certain areas
with the purpose of reducing the incidence of bovine TB. The effectiveness of culling on the incidence of
TB in cattle is a contentious issue, with proponents and opponents citing their own studies to support their
position.[162][163][164] For instance, a study by an Independent Study Group on badger culling, reported on 18
June 2007, concluded that culling was unlikely to be effective and would make only a “modest difference” to
the spread of TB, and that "badger culling cannot meaningfully contribute to the future control of cattle TB"; in
contrast, another report concluded that this policy would have a significant impact.[165] On 4 July 2008,
the UK government decided against a proposed random culling policy.

Agriculture in India

From Wikipedia, the free encyclopedia


Minor crop areas in India: P Pulses, S Sugarcane, J Jute, Cn Coconut, C Cotton, and T Tea.

The fertile Ganges River Delta—known for severe flooding and tropical cyclones—supports cultivation
of jute, tea, and rice. Fish are both farmed in and exported from this region.

Agriculture in India has a long history, dating back ten thousand years.

Today, India ranks second worldwide in farm output. Agriculture and allied sectors like forestry and
logging accounted for 16.6% of the GDP in 2007 and employed 52% of the total workforce;[1] despite a
steady decline in its share of the GDP, the sector is still the largest in the economy and plays a significant
role in the overall socio-economic development of India.

India is the largest producer in the world of fresh fruit, anise, fennel, badian, coriander, tropical fresh
fruit, jute, pigeon peas, pulses, spices, millets, castor oil seed, sesame seeds, safflower seeds, lemons,
limes, cow's milk, dry chillies and peppers, chick peas, cashew nuts, okra, ginger, turmeric, guavas,
mangoes, goat milk, and buffalo milk and meat;[2][3] it is also a significant producer of coffee.[4] It also
has the world's largest cattle population (281 million).[5] It is the second largest producer of cashews,
cabbages, cotton seed and lint, fresh vegetables, garlic, egg plant, goat meat, silk, nutmeg, mace,
cardamom, onions, wheat, rice, sugarcane, lentil, dry beans, groundnut, tea, green peas, cauliflowers,
potatoes, pumpkins, squashes, gourds and inland fish.[2][6] It is the third largest producer of tobacco,
sorghum, rapeseed, coconuts, hen's eggs and tomatoes.[2][6] India accounts for 10% of world fruit
production, ranking first in the production of mangoes, papaya, banana and sapota.[6]

India's population is growing faster than its ability to produce rice and wheat.[7]

[edit] Initiatives

The required level of investment for the development of marketing, storage and cold storage
infrastructure is estimated to be huge. The government has not been able to implement various schemes to
raise investment in marketing infrastructure. Among these schemes are Construction of Rural Godowns,
Market Research and Information Network, and Development / Strengthening of Agricultural Marketing
Infrastructure, Grading and Standardization.[8]

The Indian Agricultural Research Institute (IARI), established in 1905, was responsible for the research
leading to the "Indian Green Revolution" of the 1970s. The Indian Council of Agricultural Research
(ICAR) is the apex body in agriculture and related allied fields, including research and education.[9] The
Union Minister of Agriculture is the President of the ICAR. The Indian Agricultural Statistics Research
Institute develops new techniques for the design of agricultural experiments, analyses data in agriculture,
and specializes in statistical techniques for animal and plant breeding.

Recently, the Government of India has set up the Farmers Commission to comprehensively evaluate the
agriculture program.[10] However, the recommendations have had a mixed reception.

[edit] Mixed Farming

In August 2001 India's Parliament passed the Plant Variety Protection and Farmers' Rights Act, a piece of
sui generis legislation. Being a WTO member, India had to comply with TRIPS and include PVP.
However, farmers' rights are of particular importance in India, and thus the Act also allows farmers
to save, sow and sell seeds as they always have, even if they are of a protected variety. This not only saves
the livelihoods of many farmers, it also provides an environment for the continuing development and
use of landraces, says Suman Sahai.

[edit] Problems

Cotton flower in India. Cotton is the main cash crop in the Vidarbha region.

Slow agricultural growth is a concern for policymakers as some two-thirds of India’s people depend on
rural employment for a living. Current agricultural practices are neither economically nor
environmentally sustainable and India's yields for many agricultural commodities are low. Poorly
maintained irrigation systems and almost universal lack of good extension services are among the factors
responsible. Farmers' access to markets is hampered by poor roads, rudimentary market infrastructure,
and excessive regulation.
—World Bank: "India Country Overview 2008"[11]

The low productivity in India is a result of the following factors:

 According to the World Bank's "India: Priorities for Agriculture and Rural Development",
India's large agricultural subsidies are hampering productivity-enhancing investment.
Overregulation of agriculture has increased costs, price risks and uncertainty. Government
intervenes in labour, land, and credit markets. India has inadequate infrastructure and services.[12]
The World Bank also says that the allocation of water is inefficient, unsustainable and inequitable.
The irrigation infrastructure is deteriorating.[12] The overuse of water is currently being covered
by over-pumping aquifers, but as these are falling by a foot of groundwater each year, this is a
limited resource.[13]
 Illiteracy, general socio-economic backwardness, slow progress in implementing land reforms
and inadequate or inefficient finance and marketing services for farm produce.
 Inconsistent government policy. Agricultural subsidies and taxes are often changed without notice for
short-term political ends.
 The average size of land holdings is very small (less than 20,000 m², i.e. two hectares) and is subject to
fragmentation due to land ceiling acts, and in some cases, family disputes. Such small holdings
are often over-manned, resulting in disguised unemployment and low productivity of labour.
 Adoption of modern agricultural practices and use of technology is inadequate, hampered by
ignorance of such practices, high costs and impracticality in the case of small land holdings.
 Irrigation facilities are inadequate, as revealed by the fact that only 52.6% of the land was
irrigated in 2003–04,[14] which results in farmers still being dependent on rainfall, specifically the
monsoon season. A good monsoon results in robust growth for the economy as a whole, while a
poor monsoon leads to sluggish growth.[15] Farm credit is regulated by NABARD, which is the
statutory apex agent for rural development in the subcontinent. At the same time, overpumping
made possible by subsidized electric power is leading to an alarming drop in aquifer
levels.[16][17][18]

[edit] History

Main article: History of agriculture in India

Indian agriculture began by 9000 BC as a result of early cultivation of plants, and domestication of crops
and animals.[19] Settled life soon followed with implements and techniques being developed for
agriculture.[20][21] Double monsoons led to two harvests being reaped in one year.[22] Indian products soon
reached the world via existing trading networks and foreign crops were introduced to India.[22][23] Plants
and animals—considered essential to their survival by the Indians—came to be worshiped and
venerated.[24]

The Middle Ages saw irrigation channels reach a new level of sophistication in India, and Indian crops
affected the economies of other regions of the world under Islamic patronage.[25][26] Land and water
management systems were developed with the aim of providing uniform growth.[27][28] Despite some
stagnation during the later modern era, the independent Republic of India was able to develop a
comprehensive agricultural program.

Agriculture

Agriculture is the cultivation and processing of animals, plants, fungi and other life forms for
food, fibers and other byproducts.[1] Agriculture was the key development in the rise of sedentary human
civilization, whereby farming of domesticated species created food surpluses that nurtured the
development of much denser and more stratified societies. The study of agriculture is known as
agricultural science. Agriculture is also observed in certain species of ant and termite.[2][3]

Farming generally relies on techniques to expand and maintain the lands suitable for raising domesticated
species. For plants, this usually requires some form of irrigation, although there are methods of dryland
farming. Cultivation of crops on arable land and the pastoral herding of livestock on rangeland remain at
the foundation of agriculture. In the past century there has been increasing concern to identify and
quantify various forms of agriculture. In the developed world, the range usually extends between
sustainable agriculture (e.g. permaculture or organic agriculture) and intensive farming (e.g. industrial
agriculture).

Modern agronomy, plant breeding, pesticides and fertilizers, and technological improvements have
sharply increased yields from cultivation, but at the same time have caused widespread ecological damage
and negative human health effects.[4] Selective breeding and modern practices in animal husbandry such
as intensive pig farming (and similar practices applied to the chicken) have similarly increased the output
of meat, but have raised concerns about animal cruelty and the health effects of the antibiotics, growth
hormones, and other chemicals commonly used in industrial meat production.[5]

The major agricultural products can be broadly grouped into foods, fibers, fuels, and raw materials. In the
21st century, plants have been used to grow biofuels, biopharmaceuticals, bioplastics,[6] and
pharmaceuticals.[7] Specific foods include cereals, vegetables, fruits, and meat. Fibers include cotton,
wool, hemp, silk and flax. Raw materials include lumber and bamboo. Other useful materials are
produced by plants, such as resins. Biofuels include methane from biomass, ethanol, and biodiesel. Cut
flowers, nursery plants, tropical fish and birds for the pet trade are some of the ornamental products.

In 2007, one third of the world's workers were employed in agriculture. The services sector has overtaken
agriculture as the economic sector employing the most people worldwide.[8] Despite the size of its
workforce, agricultural production accounts for less than five percent of the gross world product (an
aggregate of all gross domestic products).

[edit] Etymology

The word agriculture is the English adaptation of Latin agricultūra, from ager, "a field",[9] and cultūra,
"cultivation" in the strict sense of "tillage of the soil".[10] Thus, a literal reading of the word yields "tillage
of a field / of fields".

[edit] Overview

The percentage of the human population working in agriculture has decreased over time.

Agriculture has played a key role in the development of human civilization. Until the Industrial
Revolution, the vast majority of the human population labored in agriculture. Development of agricultural
techniques has steadily increased agricultural productivity, and the widespread diffusion of these
techniques during a time period is often called an agricultural revolution. A remarkable shift in
agricultural practices has occurred over the past century in response to new technologies. In particular, the
Haber-Bosch method for synthesizing ammonia (the precursor of nitrogen fertilizers such as ammonium
nitrate) made the traditional practice of recycling nutrients with crop rotation and animal manure less necessary.
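
For reference, the core reaction of the Haber-Bosch process, which fixes atmospheric nitrogen as ammonia
over an iron catalyst at high temperature and pressure, is (a standard chemistry fact, noted here for clarity
rather than taken from the text):

    N2 + 3 H2 ⇌ 2 NH3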

Synthetic nitrogen, along with mined rock phosphate, pesticides and mechanization, greatly
increased crop yields in the early 20th century. Increased supply of grains has led to cheaper livestock as
well. Further, global yield increases were experienced later in the 20th century when high-yield varieties
of common staple grains such as rice, wheat, and corn (maize) were introduced as a part of the Green
Revolution. The Green Revolution exported the technologies (including pesticides and synthetic nitrogen)
of the developed world to the developing world. Thomas Malthus famously predicted that the Earth
would not be able to support its growing population, but technologies such as the Green Revolution have
allowed the world to produce a surplus of food.[11]

Many governments have subsidized agriculture to ensure an adequate food supply. These agricultural
subsidies are often linked to the production of certain commodities such as wheat, corn (maize), rice,
soybeans, and milk. These subsidies, especially when instituted by developed countries, have been criticized
as protectionist, inefficient, and environmentally damaging.[12]

In the past century agriculture has been characterized by enhanced productivity, the use of synthetic
fertilizers and pesticides, selective breeding, mechanization, water contamination, and farm subsidies.
Proponents of organic farming such as Sir Albert Howard argued in the early 20th century that the
overuse of pesticides and synthetic fertilizers damages the long-term fertility of the soil. While this
concern lay dormant for decades, as environmental awareness has increased in the 21st century there has
been a movement towards sustainable agriculture among some farmers, consumers, and policymakers.

In recent years there has been a backlash against perceived external environmental effects of mainstream
agriculture, particularly regarding water pollution,[13] resulting in the organic movement. One of the major
forces behind this movement has been the European Union, which first certified organic food in 1991 and
began reform of its Common Agricultural Policy (CAP) in 2005 to phase out commodity-linked farm
subsidies,[14] also known as decoupling. The growth of organic farming has renewed research in
alternative technologies such as integrated pest management and selective breeding. Recent mainstream
technological developments include genetically modified food.

In late 2007, several factors pushed up the price of grains consumed by humans as well as used to feed
poultry and dairy cows and other cattle, causing higher prices of wheat (up 58%), soybean (up 32%), and
maize (up 11%) over the year.[15][16] Food riots took place in several countries across the world.[17][18][19]
Contributing factors included drought in Australia and elsewhere, increasing demand for grain-fed animal
products from the growing middle classes of countries such as China and India, diversion of foodgrain to
biofuel production and trade restrictions imposed by several countries.

An epidemic of stem rust on wheat caused by race Ug99 is currently spreading across Africa and into
Asia and is causing major concern.[20][21][22] Approximately 40% of the world's agricultural land is
seriously degraded.[23] In Africa, if current trends of soil degradation continue, the continent might be able
to feed just 25% of its population by 2025, according to UNU's Ghana-based Institute for Natural
Resources in Africa.[24]

[edit] History

Main article: History of agriculture

A Sumerian harvester's sickle made from baked clay (ca. 3000 BC).

Agricultural practices such as irrigation, crop rotation, fertilizers, and pesticides were developed long ago,
but have made great strides in the past century. The history of agriculture has played a major role in
human history, as agricultural progress has been a crucial factor in worldwide socio-economic change.
Division of labor in agricultural societies made commonplace specializations rarely seen in hunter-
gatherer cultures, as did arts such as epic literature and monumental architecture, and codified
legal systems. When farmers became capable of producing food beyond the needs of their own families,
others in their society were freed to devote themselves to projects other than food acquisition. Historians
and anthropologists have long argued that the development of agriculture made civilization possible.
Before the appearance of agriculture, the total world population probably never exceeded 15 million
inhabitants.[25]

[edit] Ancient origins

Further information: Neolithic Revolution

The Fertile Crescent of Western Asia, Egypt, and India were sites of the earliest planned sowing and
harvesting of plants that had previously been gathered in the wild. Independent development of
agriculture occurred in northern and southern China, Africa's Sahel, New Guinea and several regions of
the Americas.[26] The eight so-called Neolithic founder crops of agriculture appeared: first emmer wheat and
einkorn wheat, then hulled barley, peas, lentils, bitter vetch, chick peas and flax.

By 7000 BC, small-scale agriculture reached Egypt. From at least 7000 BC the Indian subcontinent saw
farming of wheat and barley, as attested by archaeological excavation at Mehrgarh in Balochistan in what
is present-day Pakistan. By 6000 BC, mid-scale farming was entrenched on the banks of the Nile, even
though irrigation had not yet matured sufficiently. About this time, agriculture was developed independently in
the Far East, with rice, rather than wheat, as the primary crop. Chinese and Indonesian farmers went on to
domesticate taro and beans including mung, soy and azuki. To complement these new sources of
carbohydrates, highly organized net fishing of rivers, lakes and ocean shores in these areas brought in
great volumes of essential protein. Collectively, these new methods of farming and fishing inaugurated a
human population boom that dwarfed all previous expansions and continues today.

By 5000 BC, the Sumerians had developed core agricultural techniques including large-scale intensive
cultivation of land, monocropping, organized irrigation, and the use of a specialized labor force,
particularly along the waterway now known as the Shatt al-Arab, from its Persian Gulf delta to the
confluence of the Tigris and Euphrates. Domestication of wild aurochs and mouflon into cattle and sheep,
respectively, ushered in the large-scale use of animals for food/fiber and as beasts of burden. The
shepherd joined the farmer as an essential provider for sedentary and seminomadic societies. Maize,
manioc, and arrowroot were first domesticated in the Americas as far back as 5200 BC.[27]

The potato, tomato, pepper, squash, several varieties of bean, tobacco, and several other plants were also
developed in the Americas, as was extensive terracing of steep hillsides in much of Andean South
America. The Greeks and Romans built on techniques pioneered by the Sumerians, but made few
fundamentally new advances. Southern Greeks struggled with very poor soils, yet managed to become a
dominant society for centuries. The Romans were noted for an emphasis on the cultivation of crops for trade.

In the Americas, a parallel agricultural revolution occurred, resulting in some of the most important
crops grown today. In Mesoamerica wild teosinte was transformed through human selection into the
ancestor of modern maize, more than 6000 years ago. It gradually spread across North America and was
the major crop of Native Americans at the time of European exploration.[28] Other Mesoamerican crops
include hundreds of varieties of squash and beans. Cocoa was also a major crop in pre-Columbian Mexico
and Central America. The turkey, one of the most important meat birds, was probably domesticated in
Mexico or the U.S. Southwest. In the Andes region of South America the major domesticated crop was
potatoes, domesticated perhaps 5000 years ago. Large varieties of beans were domesticated, in South
America, as well as animals, including llamas, alpacas, and guinea pigs. Coca, still a major crop, was also
domesticated in the Andes.

In a minor center of domestication, the indigenous people of the Eastern U.S. appear to have domesticated
numerous crops. Sunflowers, tobacco,[29] varieties of squash and Chenopodium, as well as crops no longer
grown, including marshelder and little barley, were domesticated.[30][31] Other wild foods may have
undergone some selective cultivation, including wild rice and maple sugar. The most common varieties of
strawberry were domesticated from Eastern North America.[32]

By 3500 BC, the simplest form of the plough was developed, called the ard.[33] Before this period, simple
digging sticks or hoes were used. These tools would have also been easier to transport, which was a
benefit as people only stayed until the soil's nutrients were depleted. However, excavations in
Mexico have shown that the continuous cultivation of smaller pieces of land would also have been a
sustainable practice. Later research in central Europe revealed that agriculture was indeed
practiced in this manner, for which ards were much more efficient than digging sticks.[34]

[edit] Middle Ages

During the Middle Ages, farmers in North Africa, the Near East, and Europe began making use of
agricultural technologies including irrigation systems based on hydraulic and hydrostatic principles,
machines such as norias, water-raising machines, dams, and reservoirs. This, combined with the invention
of a three-field system of crop rotation and the moldboard plow, greatly improved agricultural efficiency.

In the European medieval period, agriculture was considered part of the set of seven mechanical arts.

[edit] Modern era

Further information: British Agricultural Revolution and Green Revolution

The Harvesters. Pieter Bruegel. 1565.

This photo from a 1921 encyclopedia shows a tractor ploughing an alfalfa field.

Satellite image of farming in Minnesota.

Infrared image of the above farms. To the untrained eye, this image appears to be a hodge-podge of colours
without any apparent purpose. But farmers are now trained to see yellows where crops are infested,
shades of red indicating crop health, black where flooding occurs, and brown where unwanted pesticides
land on chemical-free crops.[citation needed]

After 1492, a global exchange of previously local crops and livestock breeds occurred. Key crops
involved in this exchange included the tomato, maize, potato, manioc, cocoa bean and tobacco going from
the New World to the Old, and several varieties of wheat, spices, coffee, and sugar cane going from the
Old World to the New. The most important animal exportations from the Old World to the New were the
horse and the dog (dogs were already present in the pre-Columbian Americas but not in the numbers
and breeds suited to farm work). Although not usually food animals, the horse (including donkeys and
ponies) and dog quickly filled essential production roles on western-hemisphere farms.

The potato became an important staple crop in northern Europe.[35] Since being introduced by the Portuguese
in the 16th century,[36] maize and manioc have replaced traditional African crops as the continent's most
important staple food crops.[37]

By the early 19th century, agricultural techniques, implements, seed stocks and cultivars (cultivated plants
selected and given a unique name because of their decorative or useful characteristics) had so improved
that yield per land unit was many times that seen in the Middle Ages. Although there is a vast and
interesting history of crop cultivation before the dawn of the 20th century, there is little question that the work of Charles
Darwin and Gregor Mendel created the scientific foundation for plant breeding that led to its explosive
impact over the past 150 years.[38]

With the rapid rise of mechanization in the late 19th century and the 20th century, particularly in the form
of the tractor, farming tasks could be done with a speed and on a scale previously impossible. These
advances have led to efficiencies enabling certain modern farms in the United States, Argentina, Israel,
Germany, and a few other nations to output volumes of high-quality produce per land unit at what may be
the practical limit.

The Haber-Bosch method for synthesizing ammonia (the basis of nitrogen fertilizers such as ammonium
nitrate) represented a major breakthrough and allowed crop yields to overcome previous constraints. In the
past century agriculture has been characterized by enhanced productivity, the substitution of synthetic
fertilizers and pesticides for labor, water pollution, and farm subsidies. In recent years there has been a backlash against the external
environmental effects of conventional agriculture, resulting in the organic movement.

The cereals rice, corn, and wheat provide 60% of the human food supply.[39] Between 1700 and 1980, "the
total area of cultivated land worldwide increased 466%" and yields increased dramatically, particularly
because of selectively bred high-yielding varieties, fertilizers, pesticides, irrigation, and machinery.[39] For
example, irrigation increased corn yields in eastern Colorado by 400 to 500% from 1940 to 1997.[39]
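
Percentage-increase figures like these are easy to misread; the following minimal Python sketch (the
numbers come from the paragraph above, everything else is illustrative) shows the multipliers they imply:

    def growth_multiplier(percent_increase):
        # A "466% increase" means the final value is (1 + 466/100) = 5.66 times the original.
        return 1 + percent_increase / 100.0

    print(growth_multiplier(466))  # 5.66x: cultivated land area, 1700-1980
    print(growth_multiplier(400))  # 5.0x: lower bound of the Colorado corn yield gain
    print(growth_multiplier(500))  # 6.0x: upper bound of the Colorado corn yield gain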

However, concerns have been raised over the sustainability of intensive agriculture. Intensive agriculture
has become associated with decreased soil quality in India and Asia, and there has been increased concern
over the effects of fertilizers and pesticides on the environment, particularly as population increases and
food demand expands. The monocultures typically used in intensive agriculture increase the number of
pests, which are controlled through pesticides. Integrated pest management (IPM), which "has been
promoted for decades and has had some notable successes" has not significantly affected the use of
pesticides because policies encourage the use of pesticides and IPM is knowledge-intensive.[39]

Although the "Green Revolution" significantly increased rice yields in Asia, yield increases have not
occurred in the past 15–20 years.[40] The genetic "yield potential" has increased for wheat, but the yield
potential for rice has not increased since 1966, and the yield potential for maize has "barely increased in
35 years".[40] It takes a decade or two for herbicide-resistant weeds to emerge, and insects become
resistant to insecticides within about a decade.[40] Crop rotation helps to prevent resistance.[40]

Agricultural exploration expeditions, since the late 19th century, have been mounted to find new species
and new agricultural practices in different areas of the world. Two early examples of expeditions include
Frank N. Meyer's fruit- and nut-collecting trip to China and Japan from 1916–1918[41] and the Dorsett-
Morse Oriental Agricultural Exploration Expedition to China, Japan, and Korea from 1929–1931 to
collect soybean germplasm to support the rise in soybean agriculture in the United States.[42]

In 2009, the agricultural output of China was the largest in the world, followed by the European Union,
India and the United States, according to the International Monetary Fund (see below). Economists
measure the total factor productivity of agriculture and by this measure agriculture in the United States is
roughly 2.6 times more productive than it was in 1948.[43]

Six countries (the US, Canada, France, Australia, Argentina and Thailand) supply 90% of grain
exports.[44] The United States controls almost half of world grain exports.[44] Water deficits, which are
already spurring heavy grain imports in numerous middle-sized countries, including Algeria, Iran, Egypt,
and Mexico,[45] may soon do the same in larger countries, such as China or India.[46]

[edit] Crop production systems

Farmers work inside a rice field in Andhra Pradesh, India.

Cropping systems vary among farms depending on the available resources and constraints; geography and
climate of the farm; government policy; economic, social and political pressures; and the philosophy and
culture of the farmer.[47][48] Shifting cultivation (or slash and burn) is a system in which forests are burnt,
releasing nutrients to support cultivation of annual and then perennial crops for a period of several
years.[49]

Then the plot is left fallow to regrow forest, and the farmer moves to a new plot, returning after many
more years (10-20). This fallow period is shortened if population density grows, requiring the input of
nutrients (fertilizer or manure) and some manual pest control. Annual cultivation is the next phase of
intensity in which there is no fallow period. This requires even greater nutrient and pest control inputs.

Further industrialization led to the use of monocultures, when one cultivar is planted on a large acreage.
Because of the low biodiversity, nutrient use is uniform and pests tend to build up, necessitating the
greater use of pesticides and fertilizers.[48] Multiple cropping, in which several crops are grown
sequentially in one year, and intercropping, when several crops are grown at the same time, are other kinds
of annual cropping systems known as polycultures.[49]

In tropical environments, all of these cropping systems are practiced. In subtropical and arid
environments, the timing and extent of agriculture may be limited by rainfall, either not allowing multiple
annual crops in a year, or requiring irrigation. In all of these environments perennial crops are grown
(e.g. coffee and cacao) and systems such as agroforestry are practiced. In temperate environments, where
ecosystems were predominantly grassland or prairie, highly productive annual cropping is the dominant
farming system.[49]

The last century has seen the intensification, concentration and specialization of agriculture, relying upon
new technologies of agricultural chemicals (fertilizers and pesticides), mechanization, and plant breeding
(hybrids and GMOs). In the past few decades, a move towards sustainability in agriculture has also
developed, integrating ideas of socio-economic justice and conservation of resources and the environment
within a farming system.[50][51] This has led to the development of many responses to the conventional
agriculture approach, including organic agriculture, urban agriculture, community supported agriculture,
ecological or biological agriculture, integrated farming and holistic management, as well as an increased
trend towards agricultural diversification.

[edit] Crop statistics

Important categories of crops include grains and pseudograins, pulses (legumes), forage, and fruits and
vegetables. Specific crops are cultivated in distinct growing regions throughout the world. The figures
below are in millions of metric tonnes, based on FAO estimates.

Top agricultural products, by crop types (million tonnes, 2004 data)

  Cereals                  2,263
  Vegetables and melons      866
  Roots and tubers           715
  Milk                       619
  Fruit                      503
  Meat                       259
  Oilcrops                   133
  Fish (2001 estimate)       130
  Eggs                        63
  Pulses                      60
  Vegetable fiber             30

Source: Food and Agriculture Organization (FAO)[52]

Top agricultural products, by individual crops (million tonnes, 2004 data)

  Sugar cane               1,324
  Maize                      721
  Wheat                      627
  Rice                       605
  Potatoes                   328
  Sugar beet                 249
  Soybean                    204
  Oil palm fruit             162
  Barley                     154
  Tomato                     120

Source: Food and Agriculture Organization (FAO)[52]

[edit] Livestock production systems

Main article: Livestock

Ploughing rice paddies with water buffalo, in Indonesia.

Animals, including horses, mules, oxen, camels, llamas, alpacas, and dogs, are often used to help cultivate
fields, harvest crops, wrangle other animals, and transport farm products to buyers. Animal husbandry not
only refers to the breeding and raising of animals for meat or to harvest animal products (like milk, eggs,
or wool) on a continual basis, but also to the breeding and care of species for work and companionship.
Livestock production systems can be defined based on feed source, as grassland-based, mixed, and
landless.[53]

Grassland-based livestock production relies upon plant material such as shrubland, rangeland, and
pastures for feeding ruminant animals. Outside nutrient inputs may be used; however, manure is returned
directly to the grassland as a major nutrient source. This system is particularly important in areas where
crop production is not feasible because of climate or soil, representing 30-40 million pastoralists.[49]
Mixed production systems use grassland, fodder crops and grain feed crops as feed for ruminant and
monogastric (one stomach; mainly chickens and pigs) livestock. Manure is typically recycled in mixed
systems as a fertilizer for crops. Approximately 68% of all agricultural land is permanent pastures used in
the production of livestock.[54]

Landless systems rely upon feed from outside the farm, representing the de-linking of crop and livestock
production found more prevalently in OECD member countries. In the U.S., 70% of the grain grown is
fed to animals on feedlots.[49] Synthetic fertilizers are more heavily relied upon for crop production and
manure utilization becomes a challenge as well as a source of pollution.

[edit] Production practices

A road leading across the farm allows machinery access for production practices.

Tillage is the practice of plowing soil to prepare for planting or for nutrient incorporation or for pest
control. Tillage varies in intensity from conventional to no-till. It may improve productivity by warming
the soil, incorporating fertilizer and controlling weeds, but also renders soil more prone to erosion,
triggers the decomposition of organic matter releasing CO2, and reduces the abundance and diversity of
soil organisms.[55][56]

Pest control includes the management of weeds, insects/mites, and diseases. Chemical (pesticides),
biological (biocontrol), mechanical (tillage), and cultural practices are used. Cultural practices include
crop rotation, culling, cover crops, intercropping, composting, avoidance, and resistance. Integrated pest
management attempts to use all of these methods to keep pest populations below the number which would
cause economic loss, and recommends pesticides as a last resort.[57]

Nutrient management includes both the source of nutrient inputs for crop and livestock production, and
the method of utilization of manure produced by livestock. Nutrient inputs can be chemical inorganic
fertilizers, manure, green manure, compost and mined minerals.[58] Crop nutrient use may also be
managed using cultural techniques such as crop rotation or a fallow period.[59][60] Manure is used either by
holding livestock where the feed crop is growing, such as in managed intensive rotational grazing, or by
spreading either dry or liquid formulations of manure on cropland or pastures.

Water management is needed where rainfall is insufficient or variable, which occurs to some degree in most
regions of the world.[49] Some farmers use irrigation to supplement rainfall. In other areas such as the
Great Plains in the U.S. and Canada, farmers use a fallow year to conserve soil moisture to use for
growing a crop in the following year.[61] Agriculture represents 70% of freshwater use worldwide.[62]

[edit] Processing, distribution, and marketing

Main article: Food processing


Main article: Agricultural marketing

In the United States, food costs attributed to processing, distribution, and marketing have risen while the
costs attributed to farming have declined. This is related to the greater efficiency of farming, combined
with the increased level of value addition (e.g. more highly processed products) provided by the supply
chain. From 1960 to 1980 the farm share was around 40%, but by 1990 it had declined to 30% and by
1998, 22.2%. Market concentration has increased in the sector as well, with the top 20 food manufacturers
accounting for half the food-processing value in 1995, over double that produced in 1954. As of 2000 the
top six US supermarket groups had 50% of sales compared to 32% in 1992. Although the total effect of
the increased market concentration is likely increased efficiency, the changes redistribute economic
surplus from producers (farmers) and consumers, and may have negative implications for rural
communities.[63]

[edit] Crop alteration and biotechnology

Main article: Plant breeding

Tractor and Chaser bin.

Crop alteration has been practiced by humankind for thousands of years, since the beginning of
civilization. Altering crops through breeding practices changes the genetic make-up of a plant to develop
crops with more beneficial characteristics for humans, for example, larger fruits or seeds, drought-
tolerance, or resistance to pests. Significant advances in plant breeding ensued after the work of geneticist
Gregor Mendel. His work on dominant and recessive alleles gave plant breeders a better understanding of
genetics and brought great insights to the techniques utilized by plant breeders. Crop breeding includes
techniques such as plant selection with desirable traits, self-pollination and cross-pollination, and
molecular techniques that genetically modify the organism.[64]

Domestication of plants has, over the centuries, increased yield, improved disease resistance and drought
tolerance, eased harvest and improved the taste and nutritional value of crop plants. Careful selection and
breeding have had enormous effects on the characteristics of crop plants. Plant selection and breeding in
the 1920s and 1930s improved pasture (grasses and clover) in New Zealand. Extensive X-ray and
ultraviolet induced mutagenesis efforts (i.e. primitive genetic engineering) during the 1950s produced the
modern commercial varieties of grains such as wheat, corn (maize) and barley.[65][66]

The Green Revolution popularized the use of conventional hybridization to increase yields many-fold by
creating "high-yielding varieties". For example, average yields of corn (maize) in the USA have increased
from around 2.5 tons per hectare (t/ha) (40 bushels per acre) in 1900 to about 9.4 t/ha (150 bushels per
acre) in 2001. Similarly, worldwide average wheat yields have increased from less than 1 t/ha in 1900 to
more than 2.5 t/ha in 1990. South American average wheat yields are around 2 t/ha, African under 1 t/ha,
Egypt and Arabia up to 3.5 to 4 t/ha with irrigation. In contrast, the average wheat yield in countries such
as France is over 8 t/ha. Variations in yields are due mainly to variation in climate, genetics, and the level
of intensive farming techniques (use of fertilizers, chemical pest control, growth control to avoid
lodging).[67][68][69]
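
As a quick check on the corn (maize) figures above, the following Python sketch converts bushels per acre
to tonnes per hectare. The 56 lb per bushel test weight for shelled corn is a standard US convention
assumed here, not something stated in the text:

    LB_PER_BUSHEL_CORN = 56           # standard US test weight for shelled corn (assumed)
    KG_PER_LB = 0.45359237
    ACRES_PER_HECTARE = 2.4710538

    def bushels_per_acre_to_tonnes_per_hectare(bu_per_acre):
        # kilograms harvested per acre, then scaled up to one hectare
        kg_per_acre = bu_per_acre * LB_PER_BUSHEL_CORN * KG_PER_LB
        return kg_per_acre * ACRES_PER_HECTARE / 1000.0

    print(bushels_per_acre_to_tonnes_per_hectare(40))   # ~2.5 t/ha, the 1900 US average
    print(bushels_per_acre_to_tonnes_per_hectare(150))  # ~9.4 t/ha, the 2001 US average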

[edit] Genetic engineering

Main article: Genetic Engineering

Genetically Modified Organisms (GMOs) are organisms whose genetic material has been altered by
genetic engineering techniques generally known as recombinant DNA technology. Genetic engineering
has expanded the genes available to breeders to utilize in creating desired germlines for new crops. After
mechanical tomato-harvesters were developed in the early 1960s, agricultural scientists genetically
modified tomatoes to be more resistant to mechanical handling. More recently, genetic engineering is
being employed in various parts of the world, to create crops with other beneficial traits.

[edit] Herbicide-tolerant GMO Crops

Roundup Ready seed has a herbicide-resistant gene implanted into its genome that allows the plants to
tolerate exposure to glyphosate. Roundup is a trade name for a glyphosate-based product, which is a
systemic, nonselective herbicide used to kill weeds. Roundup Ready seeds allow the farmer to grow a
crop that can be sprayed with glyphosate to control weeds without harming the resistant crop. Herbicide-
tolerant crops are used by farmers worldwide. Today, 92% of soybean acreage in the US is planted with
genetically modified herbicide-tolerant plants.[70]

With the increasing use of herbicide-tolerant crops, comes an increase in the use of glyphosate-based
herbicide sprays. In some areas glyphosate resistant weeds have developed, causing farmers to switch to
other herbicides.[71][72] Some studies also link widespread glyphosate usage to iron deficiencies in some
crops, which is both a crop production and a nutritional quality concern, with potential economic and
health implications.[73]

[edit] Insect-resistant GMO Crops

Other GMO crops used by growers include insect-resistant crops, which have a gene from the soil
bacterium Bacillus thuringiensis (Bt), which produces a toxin specific to insects. These crops protect
plants from damage by insects; one such crop is StarLink corn. Another is Bt cotton, which accounts
for 63% of US cotton acreage.[74]

Some believe that similar or better pest-resistance traits can be acquired through traditional breeding
practices, and resistance to various pests can be gained through hybridization or cross-pollination with
wild species. In some cases, wild species are the primary source of resistance traits; some tomato cultivars
that have gained resistance to at least nineteen diseases did so through crossing with wild populations of
tomatoes.[75]

[edit] Costs and benefits of GMOs

Genetic engineers may someday develop transgenic plants which would allow for irrigation, drainage,
conservation, sanitary engineering, and maintaining or increasing yields while requiring fewer fossil fuel
derived inputs than conventional crops. Such developments would be particularly important in areas
which are normally arid and rely upon constant irrigation, and on large scale farms. However, genetic
engineering of plants has proven to be controversial. Many issues surrounding food security and
environmental impacts have arisen regarding GMO practices. For example, GMOs are questioned by some
ecologists and economists concerned with GMO practices such as terminator seeds,[76][77] which is a
genetic modification that creates sterile seeds. Terminator seeds currently face strong international
opposition and continual efforts toward global bans.[78]

Another controversial issue is the patent protection given to companies that develop new types of seed
using genetic engineering. Since companies have intellectual ownership of their seeds, they have the
power to dictate terms and conditions of their patented product. Currently, ten seed companies control
over two-thirds of the global seed sales.[79] Vandana Shiva argues that these companies are guilty of
biopiracy by patenting life and exploiting organisms for profit.[80] Farmers using patented seed are
restricted from saving seed for subsequent plantings, which forces them to buy new seed every year.
Since seed saving is a traditional practice for many farmers in both developing and developed countries,
GMO seeds legally bind such farmers to abandon seed saving in favor of buying new seed every
year.[71][80]

Locally adapted seeds are an essential heritage that has the potential to be lost with current hybridized
crops and GMOs. Locally adapted seeds, also called land races or crop eco-types, are important because
they have adapted over time to the specific microclimates, soils, other environmental conditions, field
designs, and ethnic preference indigenous to the exact area of cultivation.[81] Introducing GMOs and
hybridized commercial seed to an area brings the risk of cross-pollination with local land races. Therefore,
GMOs pose a threat to the sustainability of land races and the ethnic heritage of cultures. Once seed
contains transgenic material, it becomes subject to the conditions of the seed company that owns the
patent of the transgenic material.[82]

[edit] Modern agriculture

Modern agriculture is a term used to describe the vast majority of production practices employed by
America’s farmers. The term depicts the push for innovation, stewardship and advancements continually
made by growers to sustainably produce higher-quality products with a reduced environmental impact.
Intensive scientific research and robust investment in modern agriculture during the past 50 years has
helped farmers double food production while essentially freezing the footprint of total cultivated
farmland.[83][84]

[edit] Safety

The agriculture industry works with government agencies and other organizations to ensure that farmers
have access to the technologies required to support modern agriculture practices. Farmers are supported
by education and certification programs that ensure they apply agricultural practices with care and only
when required.

[edit] Sustainability

Technological advancements help provide farmers with tools and resources to help reduce their
environmental footprint and to make farming more sustainable.[85]

New technologies have given rise to innovations like conservation tillage, a farming process which helps
prevent land loss to erosion and water pollution while enhancing carbon sequestration.[86]

The World Bank, the Bill & Melinda Gates Foundation and others have noted that integrated crop
management is based on agro-ecological principles and can increase yields while reducing environmental
damage.

[edit] Affordability

The goal of modern agriculture practices is to help farmers provide an affordable supply of food to meet
the demands of a growing population.[87] With modern agriculture, more crops can be grown on less land
allowing farmers to provide an increased supply of food at an affordable price.

[edit] Food safety, labeling and regulation

Food security issues also coincide with food safety and food labeling concerns. Currently a global treaty,
the BioSafety Protocol, regulates the trade of GMOs. The EU currently requires all GMO foods to be
labeled, whereas the US does not require transparent labeling of GMO foods. Since there are still
questions regarding the safety and risks associated with GMO foods, some believe the public should have
the freedom to choose and to know what they are eating, and they call for all GMO products to be labeled.[88]

The Food and Agriculture Organization of the United Nations (FAO) leads international efforts to defeat
hunger and provides a neutral forum where nations meet as equals to negotiate agreements and debate
food policy and the regulation of agriculture. According to Dr. Samuel Jutzi, director of FAO's animal
production and health division, lobbying by "powerful" big food corporations has stopped reforms that
would improve human health and the environment. The "real, true issues are not being addressed by the
political process because of the influence of lobbyists, of the true powerful entities," he said, speaking at
the Compassion in World Farming annual forum. For example, recent proposals for a voluntary code of
conduct for the livestock industry that would have provided incentives for improving standards for health,
and environmental regulations, such as the number of animals an area of land can support without long-
term damage, were successfully defeated due to large food company pressure.[89]

[edit] Environmental impact

Main article: Environmental issues with agriculture

Agriculture imposes external costs upon society through pesticides, nutrient runoff, excessive water
usage, and assorted other problems. A 2000 assessment of agriculture in the UK determined total external
costs for 1996 of £2,343 million, or £208 per hectare.[90] A 2005 analysis of these costs in the USA
concluded that cropland imposes approximately $5 to $16 billion in external costs annually ($30 to $96 per
hectare), while livestock production imposes $714 million.[91] Both studies concluded that more should be
done to internalize external costs; neither included subsidies in its analysis, but both noted that
subsidies also influence the cost of agriculture to society. Both focused on purely fiscal impacts. The
2000 review included reported pesticide poisonings but did not include speculative chronic effects of
pesticides, and the 2005 analysis relied on a 1992 estimate of the total impact of pesticides.
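
The per-hectare figures can be cross-checked against the totals with simple arithmetic. The Python sketch below does this; the implied farmed-area values are derived here purely for illustration and are not figures reported by the studies.

    # Back-of-the-envelope check of the external-cost figures quoted above.
    # The farmed areas printed are implied by the quoted numbers, not
    # reported by the studies themselves.
    uk_total_cost_gbp = 2_343e6   # UK total external cost, 1996
    uk_cost_per_ha_gbp = 208      # UK cost per hectare
    print(uk_total_cost_gbp / uk_cost_per_ha_gbp / 1e6)   # ~11.3 million ha

    us_totals_usd = (5e9, 16e9)   # US cropland external costs, low/high
    us_per_ha_usd = (30, 96)      # US cost per hectare, low/high
    for total, per_ha in zip(us_totals_usd, us_per_ha_usd):
        print(total / per_ha / 1e6)                       # ~167 million ha

Both US endpoints imply roughly the same cropland area, a useful consistency check on the quoted ranges.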

A key figure credited with saving billions of lives through his revolutionary work in developing new
agricultural techniques is Norman Borlaug. His transformative work brought high-yield crop varieties to
developing countries and earned him the unofficial title of father of the Green Revolution.

[edit] Livestock issues

A senior UN official and co-author of a UN report detailing this problem, Henning Steinfeld, said
"Livestock are one of the most significant contributors to today's most serious environmental
problems".[92] Livestock production occupies 70% of all land used for agriculture, or 30% of the land
surface of the planet. It is one of the largest sources of greenhouse gases, responsible for 18% of the
world's greenhouse gas emissions as measured in CO2 equivalents. By comparison, all transportation
accounts for 13.5% of CO2 emissions. Livestock production generates 65% of human-related nitrous oxide
(which has 296 times the global warming potential of CO2) and 37% of all human-induced methane (which is
23 times as warming as CO2). It also generates 64% of the ammonia, which contributes to acid rain and
acidification of ecosystems. Livestock expansion is cited as a key factor driving deforestation; in the
Amazon basin, 70% of previously forested area is now occupied by pastures, with the remainder used for
feed crops.[93] Through deforestation and land degradation, livestock production is also driving
reductions in biodiversity.
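
The CO2-equivalent percentages above rest on global warming potentials (GWPs), the multipliers quoted in the paragraph. A minimal Python sketch of the conversion, using those multipliers and made-up emission tonnages rather than FAO data:

    # Convert per-gas emissions into tonnes of CO2 equivalent using the
    # GWP multipliers quoted in the text (CH4 ~23x, N2O ~296x CO2).
    GWP = {"CO2": 1, "CH4": 23, "N2O": 296}

    def co2_equivalent(emissions_t):
        """emissions_t maps gas name -> tonnes emitted."""
        return sum(GWP[gas] * tonnes for gas, tonnes in emissions_t.items())

    example = {"CO2": 1000.0, "CH4": 50.0, "N2O": 5.0}  # hypothetical tonnages
    print(co2_equivalent(example))  # 1000 + 50*23 + 5*296 = 3630.0 t CO2e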

[edit] Land transformation and degradation

Land transformation, the use of land to yield goods and services, is the most substantial way humans alter
the Earth's ecosystems, and is considered the driving force in the loss of biodiversity. Estimates of the
amount of land transformed by humans range from 39 to 50%.[94] Land degradation, the long-term decline in
ecosystem function and productivity, is estimated to be occurring on 24% of land worldwide, with cropland
overrepresented.[95] The UN-FAO report cites land management as the driving factor behind degradation and
reports that 1.5 billion people rely upon the degrading land. Degradation can take the form of
deforestation, desertification, soil erosion, mineral depletion, or chemical degradation (acidification
and salinization).[49]

[edit] Eutrophication

Eutrophication, an excess of nutrients in aquatic ecosystems resulting in algal blooms and anoxia, leads
to fish kills and loss of biodiversity, and renders water unfit for drinking and industrial uses.
Excessive fertilization and manure application to cropland, together with high livestock stocking
densities, cause nutrient (mainly nitrogen and phosphorus) runoff and leaching from agricultural land.
These nutrients are major nonpoint pollutants contributing to eutrophication of aquatic ecosystems.[96]

[edit] Pesticides

Pesticide use has increased since 1950 to 2.5 million tons annually worldwide, yet crop loss from pests
has remained relatively constant.[97] The World Health Organization estimated in 1992 that 3 million
pesticide poisonings occur annually, causing 220,000 deaths.[98] Pesticides select for pesticide resistance
in the pest population, leading to a condition termed the 'pesticide treadmill' in which pest resistance
warrants the development of a new pesticide.[99]

An alternative argument is that the way to 'save the environment' and prevent famine is by using
pesticides and intensive high yield farming, a view exemplified by a quote heading the Center for Global
Food Issues website: 'Growing more per acre leaves more land for nature'.[100][101] However, critics argue
that a trade-off between the environment and a need for food is not inevitable,[102] and that pesticides
simply replace good agronomic practices such as crop rotation.[99]

[edit] Climate change

Climate change has the potential to affect agriculture through changes in temperature, rainfall (timing and
quantity), CO2, solar radiation and the interaction of these elements.[49][103] Agriculture can both
mitigate and worsen global warming. Some of the increase in CO2 in the atmosphere comes from the
decomposition of organic matter in the soil, and much of the methane emitted into the atmosphere is caused
by the decomposition of organic matter in wet soils such as rice paddies.[104] Further, wet or anaerobic
soils also lose nitrogen through denitrification, releasing the greenhouse gas nitrous oxide.[105] Changes
in management can reduce the release of these greenhouse gases, and soil can further be used to sequester
some of the CO2 in the atmosphere.[104]

[edit] International economics and agriculture

See also: Agricultural subsidy

Differences in economic development, population density and culture mean that the farmers of the world
operate under very different conditions.

A US cotton farmer may receive US$230[106] in government subsidies per acre planted (in 2003), while
farmers in Mali and other third-world countries do without. When prices decline, the heavily subsidized
US farmer is not forced to reduce his output, making it difficult for cotton prices to rebound, but his
Malian counterpart may go broke in the meantime.

A livestock farmer in South Korea can count on a (highly subsidized) sales price of US$1,300 for a calf
produced.[107] A rancher in a South American Mercosur country calculates with a calf's sales price of
US$120–200 (both 2008 figures).[108] For the former, the scarcity and high cost of land are compensated by
public subsidies; the latter compensates for the absence of subsidies with economies of scale and low land
costs.

In the People's Republic of China, a rural household's productive assets may amount to one hectare of
farmland.[109] In Brazil, Paraguay and other countries where local legislation allows such purchases,
international investors buy thousands of hectares of farmland or raw land at prices of a few hundred US
dollars per hectare.[110][111][112]

[edit] List of countries by agricultural output

Main article: List of countries by GDP sector composition

Global agricultural output in 2005.

Below is a list of countries by agricultural output in 2009. Output is in millions of US$.

Rank  Country         Output
1     China           520,352
—     European Union  312,498
2     India           210,116
3     United States   171,075
4     Brazil           96,016
5     Japan            81,089
6     Russia           57,774
7     Spain            48,313
8     France           48,167
9     Australia        40,885
10    Italy            38,129

[edit] Energy and agriculture

Since the 1940s, agricultural productivity has increased dramatically, due largely to the increased use of
energy-intensive mechanization, fertilizers and pesticides. The vast majority of this energy input comes
from fossil fuel sources.[113] Between 1950 and 1984, the Green Revolution transformed agriculture
around the globe, with world grain production increasing by 250%[114][115] as world population doubled.
Modern agriculture's heavy reliance on petrochemicals and mechanization has raised concerns that oil
shortages could increase costs and reduce agricultural output, causing food shortages.

Agriculture and food system share (%) of total energy consumption by three industrialized nations

Country                        Year   Agriculture (direct & indirect)   Food system
United Kingdom[116]            2005   1.9                               11
United States of America[117]  1996   2.1                               10
United States of America[118]  2002   2.0                               14
Sweden[119]                    2000   2.5                               13

Modern or industrialized agriculture is dependent on fossil fuels in two fundamental ways: 1) direct
consumption on the farm and 2) indirect consumption to manufacture inputs used on the farm. Direct
consumption includes the use of lubricants and fuels to operate farm vehicles and machinery, and the use
of gas, liquid propane, and electricity to power dryers, pumps, lights, heaters, and coolers. American
farms directly consumed about 1.2 exajoules (1.1 quadrillion BTU) in 2002, or just over 1 percent of the
nation's total energy.[120]

Indirect consumption is mainly oil and natural gas used to manufacture fertilizers and pesticides, which
accounted for 0.6 exajoules (0.6 quadrillion BTU) in 2002.[120] The energy used to manufacture farm
machinery is also a form of indirect agricultural energy consumption, but it is not included in USDA
estimates of U.S. agricultural energy use. Together, direct and indirect consumption by U.S. farms
accounts for about 2 percent of the nation's energy use. Direct and indirect energy consumption by U.S.
farms peaked in 1979, and has gradually declined over the past 30 years.[120]
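
As a rough illustration of how the direct and indirect figures combine into the "about 2 percent" share, the sketch below assumes a round total of 100 exajoules for US primary energy use in the early 2000s; that total is an assumption for illustration, not a figure from the cited source.

    direct_ej = 1.2     # on-farm fuel and electricity use, 2002 (USDA figure)
    indirect_ej = 0.6   # energy embodied in fertilizers and pesticides, 2002
    us_total_ej = 100   # assumed approximate US total primary energy use

    share_pct = (direct_ej + indirect_ej) / us_total_ej * 100
    print(round(share_pct, 1))  # ~1.8, i.e. about 2 percent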

Food systems encompass not just agricultural production, but also off-farm processing, packaging,
transporting, marketing, consumption, and disposal of food and food-related items. Agriculture accounts
for less than one-fifth of food system energy use in the United States.[117][118]

Oil shortages could impact this food supply. Some farmers using modern organic-farming methods have
reported yields as high as those available from conventional farming without the use of synthetic
fertilizers and pesticides. However, reconditioning the soil to restore the nutrients lost under the
monoculture techniques made possible by petroleum-based technology takes time.[121][122][123][124]

In 2007, higher incentives for farmers to grow non-food biofuel crops[125] combined with other factors
(such as over-development of former farm lands, rising transportation costs, climate change, growing
consumer demand in China and India, and population growth)[126] to cause food shortages in Asia, the
Middle East, Africa, and Mexico, as well as rising food prices around the globe.[127][128] As of December
2007, 37 countries faced food crises, and 20 had imposed some sort of food-price controls. Some of these
shortages resulted in food riots and even deadly stampedes.[17][18][19]

The biggest fossil fuel input to agriculture is the use of natural gas as a hydrogen source for the Haber-
Bosch fertilizer-creation process.[129] Natural gas is used because it is the cheapest currently available
source of hydrogen.[130][131] If oil becomes so scarce that natural gas is used as a partial stopgap
replacement, and hydrogen use in transportation increases, natural gas will become much more expensive. If
the Haber process cannot be commercialized using renewable energy (such as hydrogen from electrolysis), or
if other sources of hydrogen are not available to replace it in amounts sufficient to supply transportation
and agricultural needs, this major source of fertilizer would either become extremely expensive or
unavailable. This would cause either food shortages or dramatic rises in food prices.

[edit] Mitigation of effects of petroleum shortages

One effect oil shortages could have on agriculture is a full return to organic agriculture. In light of peak-
oil concerns, organic methods are more sustainable than contemporary practices because they use no
petroleum-based pesticides, herbicides, or fertilizers. Some farmers using modern organic-farming
methods have reported yields as high as those available from conventional farming.[121][122][123][124] Organic
farming may however be more labor-intensive and would require a shift of the workforce from urban to
rural areas.[132]

It has been suggested that rural communities might obtain fuel from the biochar and synfuel process,
which uses agricultural waste to provide charcoal fertilizer, some fuel and food, instead of the normal
food vs fuel debate. As the synfuel would be used on-site, the process would be more efficient and might
just provide enough fuel for a new organic-agriculture fusion.[133][134]

It has been suggested that some transgenic plants may some day be developed which would allow for
maintaining or increasing yields while requiring fewer fossil-fuel-derived inputs than conventional
crops.[135] The possibility of success of these programs is questioned by ecologists and economists
concerned with unsustainable GMO practices such as terminator seeds,[136][137] and a January 2008 report
shows that GMO practices "fail to deliver environmental, social and economic benefits."[138]

While there has been some research on sustainability using GMO crops, at least one hyped and prominent
multi-year attempt by Monsanto Company has been unsuccessful, though during the same period
traditional breeding techniques yielded a more sustainable variety of the same crop.[139] Additionally, a
survey by the bio-tech industry of subsistence farmers in Africa to discover what GMO research would
most benefit sustainable agriculture only identified non-transgenic issues as areas needing to be
addressed.[140] Nevertheless, some governments in Africa continue to view investments in new transgenic
technologies as an essential component of efforts to improve sustainability.[141]

[edit] Electrical energy efficiency on farms

Main article: Electrical energy efficiency on United States farms

[edit] Policy

Main article: Agricultural policy

Agricultural policy focuses on the goals and methods of agricultural production. At the policy level,
common goals of agriculture include:

 Conservation
 Economic stability
 Environmental sustainability
 Food quality: Ensuring that the food supply is of a consistent and known quality.
 Food safety: Ensuring that the food supply is free of contamination.
 Food security: Ensuring that the food supply meets the population's needs.[142][143]
 Poverty reduction

Wheat

This article is about the plant. For other uses, see Wheat (disambiguation).

Wheat

Scientific classification

Kingdom: Plantae

(unranked): Angiosperms

(unranked): Monocots

(unranked): Commelinids

Order: Poales

Family: Poaceae

Subfamily: Pooideae

Tribe: Triticeae

Genus: Triticum L.

Wheat (Triticum spp.)[1] is a grass, originally from the Fertile Crescent region of the Near East, but now
cultivated worldwide. In 2007 world production of wheat was 607 million tons, making it the third most-
produced cereal after maize (784 million tons) and rice (651 million tons).[2] Globally, wheat is the
leading source of vegetable protein in human food, having a higher protein content than either maize
(corn) or rice, the other major cereals. In terms of total production tonnages used for food, it is currently
second to rice as the main human food crop, and ahead of maize, after allowing for maize's more
extensive use in animal feeds.

Wheat was a key factor enabling the emergence of city-based societies at the start of civilization because
it was one of the first crops that could be easily cultivated on a large scale, and had the additional
advantage of yielding a harvest that provides long-term storage of food. Wheat grain is a staple food used
to make flour for leavened, flat and steamed breads, biscuits, cookies, cakes, breakfast cereal, pasta,
noodles, couscous[3] and for fermentation to make beer,[4] other alcoholic beverages,[5] or biofuel.[6]

Wheat is planted to a limited extent as a forage crop for livestock, and its straw can be used as a
construction material for roofing thatch.[7][8] The husk of the grain, separated when milling white flour, is
bran. Wheat germ is the embryo portion of the wheat kernel. It is a concentrated source of vitamins,
minerals, and protein, and is sustained by the larger, starch storage region of the kernel—the endosperm.

[edit] History

Wheat is one of the first cereals known to have been domesticated, and wheat's ability to self-pollinate
greatly facilitated the selection of many distinct domesticated varieties. The archaeological record
suggests that this first occurred in the regions known as the Fertile Crescent, and the Nile Delta. These
include southeastern parts of Turkey, Syria, the Levant, Israel, and Egypt. Recent findings narrow the first
domestication of wheat down to a small region of southeastern Turkey,[9] and domesticated Einkorn wheat
at Nevalı Çori—40 miles (64 km) northwest of Gobekli Tepe in Turkey—has been dated to 9,000 B.C.[10]
However, evidence for the exploitation of wild barley has been dated to 23,000 B.C., and some say this is
also true of pre-domesticated wheat.[11]

[edit] Wheat origins near Turkey's Karacadag Mountains

Genetic analysis of wild einkorn wheat suggests that it was first grown in the Karacadag Mountains in
southeastern Turkey. Dated archeological remains of einkorn wheat in settlement sites near this region,
including those at Abu Hureyra in Syria, confirm the domestication of einkorn near the Karacadag
Mountain Range. The earliest carbon-14 date for the einkorn wheat remains at Abu Hureyra is 7800 to
7500 BCE.[12] Recent genetic and archeological discoveries indicate that both emmer wheat and
durum (hard pasta wheat) also originated from this same Karacadag region of southeastern Turkey.
Remains of harvested emmer from several sites near the Karacadag Range have been dated to between
8800 and 8400 BCE, that is, in the Neolithic period.[13]

Cultivation and repeated harvesting and sowing of the grains of wild grasses led to the creation of
domestic strains, as mutant forms ('sports') of wheat were preferentially chosen by farmers. In
domesticated wheat, grains are larger, and the seeds (spikelets) remain attached to the ear by a toughened
rachis during harvesting. In wild strains, a more fragile rachis allows the ear to easily shatter and disperse
the spikelets.[14] Selection for these traits by farmers might not have been deliberate, but
simply occurred because these traits made gathering the seeds easier; nevertheless such 'incidental'
selection was an important part of crop domestication. As the traits that improve wheat as a food source
also involve the loss of the plant's natural seed dispersal mechanisms, highly domesticated strains of
wheat cannot survive in the wild.

Cultivation of wheat began to spread beyond the Fertile Crescent after about 8000 BCE. Jared Diamond
traces the spread of cultivated emmer wheat starting in the Fertile Crescent about 8500 BCE, reaching
Greece, Cyprus and India by 6500 BCE, Egypt shortly after 6000 BCE, and Germany and Spain by 5000
BCE.[15] "The early Egyptians were developers of bread and the use of the oven and developed baking
into one of the first large-scale food production industries."[16] By 3000 BCE, wheat had reached
England and Scandinavia. A millennium later it reached China.

Wheat spread throughout Europe. In England, wheat-straw thatch was used for roofing in the Bronze Age and
remained in common use until the late 19th century.[17]

[edit] Farming techniques

Technological advances in soil preparation and seed placement at planting time, use of crop rotation and
fertilizers to improve plant growth, and advances in harvesting methods have all combined to promote
wheat as a viable crop. Agricultural cultivation using horse collar leveraged plows (at about 3000 BCE)
was one of the first innovations that increased productivity. Much later, when the use of seed drills
replaced broadcast sowing of seed in the 18th century, another great increase in productivity occurred.
Yields of wheat per unit area increased as methods of crop rotation were applied to long cultivated land,
and the use of fertilizers became widespread. Improved agricultural husbandry has more recently included
threshing machines and reaping machines (the 'combine harvester'), tractor-drawn cultivators and
planters, and better varieties (see Green Revolution and Norin 10 wheat). Great expansions of wheat
production occurred as new arable land was farmed in the Americas and Australia in the 19th and 20th
centuries.

[edit] Genetics

Spikelets of a hulled wheat, einkorn

Wheat genetics is more complicated than that of most other domesticated species. Some wheat species are
diploid, with two sets of chromosomes, but many are stable polyploids, with four sets of chromosomes
(tetraploid) or six (hexaploid).[18]

 Einkorn wheat (T. monococcum) is diploid (AA, two complements of seven chromosomes,
2n=14).[1]
 Most tetraploid wheats (e.g. emmer and durum wheat) are derived from wild emmer, T.
dicoccoides. Wild emmer is itself the result of a hybridization between two diploid wild grasses,
T. urartu and a wild goatgrass such as Aegilops searsii or Ae. speltoides. The second parent has
never been identified with certainty among surviving wild grasses, but its closest living relative is
Aegilops speltoides.[citation needed] The hybridization that formed wild emmer (AABB) occurred in
the wild, long before domestication,[18] and was driven by natural selection.

 Hexaploid wheats evolved in farmers' fields. Either domesticated emmer or durum wheat
hybridized with yet another wild diploid grass (Aegilops tauschii) to make the hexaploid
wheats, spelt wheat and bread wheat.[18] These have six sets of chromosomes (three paired
genomes), three times as many as in diploid wheat.
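
The chromosome arithmetic behind these ploidy levels is simple: each wheat genome contributes a base set of seven chromosomes, doubled in somatic cells. A small illustrative sketch (author's summary of the counts above, not from the source):

    BASE_SET = 7  # chromosomes per wheat genome (x = 7)

    ploidy = {"einkorn (AA)": 2, "emmer/durum (AABB)": 4, "bread wheat (AABBDD)": 6}
    for name, n_sets in ploidy.items():
        print(f"{name}: 2n = {n_sets}x = {n_sets * BASE_SET} chromosomes")
    # einkorn: 14; emmer/durum: 28; bread wheat: 42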

The presence of certain versions of wheat genes has been important for crop yields. Apart from mutant
versions of genes selected in antiquity during domestication, there has been more recent deliberate
selection of alleles that affect growth characteristics. Genes for the 'dwarfing' trait, first used by Japanese
wheat breeders to produce short-stalked wheat, have had a huge effect on wheat yields world-wide, and
were major factors in the success of the Green Revolution in Mexico and Asia. Dwarfing genes enable the
carbon that is fixed in the plant during photosynthesis to be diverted towards seed production, and they
also help prevent the problem of lodging. 'Lodging' occurs when an ear stalk falls over in the wind and rots
on the ground, and heavy nitrogenous fertilization of wheat makes the grass grow taller and become more
susceptible to this problem. By 1997, 81% of the developing world's wheat acreage was planted to semi-
dwarf wheats, giving both increased yields and better response to nitrogenous fertilizer.

Wild grasses in the genus Triticum and related genera, and grasses such as rye have been a source of
many disease-resistance traits for cultivated wheat breeding since the 1930s.[19]


Synthetic hexaploids made by crossing the wild goatgrass wheat ancestor Aegilops tauschii and various
durum wheats are now being deployed, and these increase the genetic diversity of cultivated wheats.

Stomata (or leaf pores) are involved in both uptake of carbon dioxide gas from the atmosphere and water
vapor losses from the leaf due to water transpiration. Basic physiological investigation of these gas
exchange processes has yielded valuable carbon isotope based methods that are used for breeding wheat
varieties with improved water-use efficiency. These varieties can improve crop productivity in rain-fed
dry-land wheat farms.[21]

In 2010, a team of scientists announced they had decoded the wheat genome for the first time (95% of the
genome of a variety of wheat known as Chinese Spring line 42).[22] This announcement was widely
misreported as representing a finished genome sequence. In fact, sequence data was produced which
allows the identification of wheat genes, but the data was not assembled to represent the map of the
genome. Information on current wheat genome sequencing activities can be found at
http://www.wheatgenome.info/

[edit] Plant breeding

Main article: Physiological and molecular wheat breeding

Sheaved and stooked wheat

In traditional agricultural systems, wheat populations often consist of landraces: informal, farmer-
maintained populations that maintain high levels of morphological diversity. Although landraces of
wheat are no longer grown in Europe and North America, they continue to be important elsewhere. The
origins of formal wheat breeding lie in the nineteenth century, when single-line varieties were created
through selection of seed from a single plant noted to have desired properties. Modern wheat breeding
developed in the first years of the twentieth century and was closely linked to the development of
Mendelian genetics. The standard method of breeding inbred wheat cultivars is by crossing two lines
using hand emasculation, then selfing or inbreeding the progeny. Selections are identified (shown to have
the genes responsible for the varietal differences) ten or more generations before release as a variety or
cultivar.[23]

F1 hybrid wheat cultivars should not be confused with wheat cultivars deriving from standard plant
breeding. Heterosis or hybrid vigor (as in the familiar F1 hybrids of maize) occurs in common (hexaploid)
wheat, but it is difficult to produce seed of hybrid cultivars on a commercial scale as is done with maize
because wheat flowers are complete and normally self-pollinate.[23] Commercial hybrid wheat seed has
been produced using chemical hybridizing agents, plant growth regulators that selectively interfere with
pollen development, or naturally occurring cytoplasmic male sterility systems. Hybrid wheat has been a
limited commercial success in Europe (particularly France), the United States and South Africa.[24]

The major breeding objectives include high grain yield, good quality, disease and insect resistance, and
tolerance to abiotic stresses, including mineral, moisture and heat tolerance. The major diseases in temperate
environments include the following, arranged in a rough order of their significance from cooler to warmer
climates: eyespot, Stagonospora nodorum blotch (also known as glume blotch), yellow or stripe rust,
powdery mildew, Septoria tritici blotch (sometimes known as leaf blotch), brown or leaf rust, Fusarium
head blight, tan spot and stem rust. In tropical areas, spot blotch (also known as Helminthosporium leaf
blight) is also important.

[edit] Hulled versus free-threshing wheat

A mature wheat field in Israel

The four wild species of wheat, along with the domesticated varieties einkorn,[25] emmer[26] and spelt,[27]
have hulls. This more primitive morphology (in evolutionary terms) consists of toughened glumes that
tightly enclose the grains, and (in domesticated wheats) a semi-brittle rachis that breaks easily on
threshing. The result is that when threshed, the wheat ear breaks up into spikelets. To obtain the grain,
further processing, such as milling or pounding, is needed to remove the hulls or husks. In contrast, in
free-threshing (or naked) forms such as durum wheat and common wheat, the glumes are fragile and the
rachis tough. On threshing, the chaff breaks up, releasing the grains. Hulled wheats are often stored as
spikelets because the toughened glumes give good protection against pests of stored grain.[25]

[edit] Naming

For more details on this topic, see Taxonomy of wheat.

Sack of wheat

There are many botanical classification systems used for wheat species, discussed in a separate article on
Wheat taxonomy. The name of a wheat species from one information source may not be the name of a
wheat species in another.

Within a species, wheat cultivars are further classified by wheat breeders and farmers in terms of:

 Growing season, such as winter wheat vs. spring wheat.[8]
 Gluten content, such as hard wheat (high protein content) vs. soft wheat (high starch content).
 Protein content. Bread wheat protein content ranges from 10% in some soft wheats with high
starch contents, to 15% in hard wheats.
 The quality of the wheat protein gluten. This protein can determine the suitability of a wheat to a
particular dish. A strong and elastic gluten present in bread wheats enables dough to trap carbon
dioxide during leavening, but elastic gluten interferes with the rolling of pasta into thin sheets.
The gluten protein in durum wheats used for pasta is strong but not elastic.
 Grain color (red, white or amber). Many wheat varieties are reddish-brown due to phenolic
compounds present in the bran layer which are transformed to pigments by browning enzymes.
White wheats have a lower content of phenolics and browning enzymes, and are generally less
astringent in taste than red wheats. The yellowish color of durum wheat and semolina flour made
from it is due to a carotenoid pigment called lutein, which can be oxidized to a colorless form by
enzymes present in the grain.
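
As an illustration of the hard/soft distinction above, a hypothetical helper might label a bread-wheat sample by protein content. The function name and thresholds here are illustrative choices based on the 10-15% range quoted, not an industry standard.

    def classify_by_protein(protein_pct):
        """Rough hard/soft split for bread wheat (illustrative thresholds)."""
        if protein_pct >= 13:
            return "hard (bread-type, strong gluten)"
        if protein_pct <= 11:
            return "soft (pastry-type, high starch)"
        return "intermediate"

    print(classify_by_protein(14.5))  # hard (bread-type, strong gluten)
    print(classify_by_protein(10.2))  # soft (pastry-type, high starch)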

[edit] Major cultivated species of wheat

 Common wheat or Bread wheat (T. aestivum) – A hexaploid species that is the most widely
cultivated in the world.
 Durum (T. durum) – The only tetraploid form of wheat widely used today, and the second most
widely cultivated wheat.
 Einkorn (T. monococcum) – A diploid species with wild and cultivated variants. Domesticated at
the same time as emmer wheat, but never reached the same importance.
 Emmer (T. dicoccum) – A tetraploid species, cultivated in ancient times but no longer in
widespread use.
 Spelt (T. spelta) – Another hexaploid species cultivated in limited quantities.

Classes used in the United States are

 Durum – Very hard, translucent, light-colored grain used to make semolina flour for pasta.

 Hard Red Spring – Hard, brownish, high-protein wheat used for bread and hard baked goods.
Bread Flour and high-gluten flours are commonly made from hard red spring wheat. It is
primarily traded at the Minneapolis Grain Exchange.
 Hard Red Winter – Hard, brownish, mellow high-protein wheat used for bread, hard baked
goods and as an adjunct in other flours to increase protein in pastry flour for pie crusts. Some
brands of unbleached all-purpose flours are commonly made from hard red winter wheat alone. It
is primarily traded by the Kansas City Board of Trade. One variety is known as "turkey red
wheat", and was brought to Kansas by Mennonite immigrants from Russia.[28]
 Soft Red Winter – Soft, low-protein wheat used for cakes, pie crusts, biscuits, and muffins. Cake
flour, pastry flour, and some self-rising flours with baking powder and salt added, for example,
are made from soft red winter wheat. It is primarily traded by the Chicago Board of Trade.
 Hard White – Hard, light-colored, opaque, chalky, medium-protein wheat planted in dry,
temperate areas. Used for bread and brewing.
 Soft White – Soft, light-colored, very low protein wheat grown in temperate moist areas. Used
for pie crusts and pastry. Pastry flour, for example, is sometimes made from soft white winter
wheat.

Red wheats may need bleaching; therefore, white wheats usually command higher prices than red wheats
on the commodities market.

[edit] As a food

Wheat is used in a wide variety of foods.

Wheat germ, crude (not whole grain)
Nutritional value per 100 g (3.5 oz)

Energy                 1,506 kJ (360 kcal)
Carbohydrates          51.8 g
Dietary fiber          13.2 g
Fat                    9.72 g
Protein                23.15 g
Thiamine (Vit. B1)     1.882 mg (145%)
Riboflavin (Vit. B2)   0.499 mg (33%)
Niacin (Vit. B3)       6.813 mg (45%)
Pantothenic acid (B5)  0.05 mg (1%)
Vitamin B6             1.3 mg (100%)
Folate (Vit. B9)       281 μg (70%)
Calcium                39 mg (4%)
Iron                   6.26 mg (50%)
Magnesium              239 mg (65%)
Phosphorus             842 mg (120%)
Potassium              892 mg (19%)
Zinc                   12.29 mg (123%)
Manganese              13.301 mg

Percentages are relative to US recommendations for adults.
Source: USDA Nutrient database

Raw wheat can be ground into flour or, using hard durum wheat only, ground into semolina; germinated and
dried to create malt; crushed or cut into cracked wheat; or parboiled (or steamed), dried, crushed and
de-branned into bulgur, also known as groats. If the raw wheat is broken into parts at the mill, as is
usually done, the outer husk or bran can be used in several ways. Wheat is a major ingredient in such
foods as bread, porridge, crackers, biscuits, muesli, pancakes, pies, pastries, cakes, cookies, muffins,
rolls, doughnuts, gravy, boza (a fermented beverage), and breakfast cereals (e.g., Wheatena, Cream of
Wheat, Shredded Wheat, and Wheaties).

[edit] Nutrition

100 grams of hard red winter wheat[clarification needed] contain about 12.6 grams of protein, 1.5 grams of total
fat, 71 grams of carbohydrate (by difference), 12.2 grams of dietary fiber, and 3.2 mg of iron (17% of the
daily requirement); the same weight of hard red spring wheat contains about 15.4 grams of protein,
1.9 grams of total fat, 68 grams of carbohydrate (by difference), 12.2 grams of dietary fiber, and 3.6 mg
of iron (20% of the daily requirement).[29]

Much of the carbohydrate fraction of wheat is starch. Wheat starch is an important commercial product of
wheat, but second in economic value to wheat gluten.[30] The principal parts of wheat flour are gluten and
starch. These can be separated in a kind of home experiment, by mixing flour and water to form a small
ball of dough, and kneading it gently while rinsing it in a bowl of water. The starch falls out of the dough
and sinks to the bottom of the bowl, leaving behind a ball of gluten.

[edit] Health concerns

Main article: Gluten sensitivity

Roughly 1% of Indian populations[31][32] have coeliac (also written as celiac) disease, a condition that is
caused by an adverse immune system reaction to gliadin, a gluten protein found in wheat (and similar
proteins of the tribe Triticeae which includes other species such as barley and rye). Upon exposure to
gliadin, the enzyme tissue transglutaminase modifies the protein, and the immune system cross-reacts
with the bowel tissue, causing an inflammatory reaction. That leads to flattening of the lining of the small
intestine, which interferes with the absorption of nutrients. The only effective treatment is a lifelong
gluten-free diet.

The estimate for people in the United States is between 0.5 and 1.0 percent of the population.[33][34][35]

While the disease is caused by a reaction to wheat proteins, it is not the same as wheat allergy.

[edit] Synopsis of major staple food

Synopsis[36] of staple food composition (per 100 g portion)

Component              Amaranth[37]  Wheat[38]  Rice[39]  Sweetcorn[40]  Potato[41]
water (g)              11            11         12        76             82
energy (kJ)            1554          1506       1527      360            288
protein (g)            14            23         7         3              1.7
fat (g)                7             10         1         1              0.1
carbohydrates (g)      65            52         79        19             16
fiber (g)              7             13         1         3              2.4
sugars (g)             1.7           0.1        >0.1      3              1.2
iron (mg)              7.6           6.3        0.8       0.5            0.5
manganese (mg)         3.4           13.3       1.1       0.2            0.1
calcium (mg)           159           39         28        2              9
magnesium (mg)         248           239        25        37             21
phosphorus (mg)        557           842        115       89             62
potassium (mg)         508           892        115       270            407
zinc (mg)              2.9           12.3       1.1       0.5            0.3
pantothenic acid (mg)  1.5           0.1        1.0       0.7            0.3
vitB6 (mg)             0.6           1.3        0.2       0.1            0.2
folate (µg)            82            281        8         42             18
thiamin (mg)           0.1           1.9        0.1       0.2            0.1
riboflavin (mg)        0.2           0.5        >0.1      0.1            >0.1
niacin (mg)            0.9           6.8        1.6       1.8            1.1
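
Tables like this lend themselves to simple programmatic comparison. The sketch below transcribes the protein and energy columns from the table and ranks the staples by protein per megajoule, a rough nutrient-density measure chosen here purely for illustration.

    staples = {  # protein (g) and energy (kJ) per 100 g, from the table above
        "amaranth":  {"protein_g": 14,  "energy_kJ": 1554},
        "wheat":     {"protein_g": 23,  "energy_kJ": 1506},
        "rice":      {"protein_g": 7,   "energy_kJ": 1527},
        "sweetcorn": {"protein_g": 3,   "energy_kJ": 360},
        "potato":    {"protein_g": 1.7, "energy_kJ": 288},
    }

    for name, v in sorted(staples.items(),
                          key=lambda kv: kv[1]["protein_g"] / kv[1]["energy_kJ"],
                          reverse=True):
        print(name, round(1000 * v["protein_g"] / v["energy_kJ"], 1), "g protein/MJ")
    # wheat 15.3, amaranth 9.0, sweetcorn 8.3, potato 5.9, rice 4.6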

[edit] Commercial use

Wheat output in 2005

Harvested wheat grain that enters trade is classified according to grain properties for the purposes of the
commodities market. Wheat buyers use these to decide which wheat to buy, as each class has special uses,
and producers use them to decide which classes of wheat will be most profitable to cultivate.

Wheat is widely cultivated as a cash crop because it produces a good yield per unit area, grows well in a
temperate climate even with a moderately short growing season, and yields a versatile, high-quality flour
that is widely used in baking. Most breads are made with wheat flour, including many breads named for
the other grains they contain, such as most rye and oat breads. The popularity of foods made from wheat flour
creates a large demand for the grain, even in economies with significant food surpluses.

Utensil made of dry wheat branches for loaves of bread

In recent years, low international wheat prices have often encouraged farmers in the USA to change to
more profitable crops. In 1998, the price at harvest was $2.68 per bushel. A USDA report[42] revealed that
in 1998, average operating costs were $1.43 per bushel and total costs were $3.97 per bushel. In that
study, farm wheat yields averaged 41.7 bushels per acre (about 2.80 metric tons per hectare), and typical total
wheat production value was $31,900 per farm, with total farm production value (including other crops) of
$173,681 per farm, plus $17,402 in government payments. There were significant profitability differences
between low- and high-cost farms, mainly due to crop yield differences, location, and farm size.
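
The per-bushel figures above imply a positive margin over operating costs but a negative margin over total costs, and the yield conversion can be verified assuming the standard 60 lb test weight for a bushel of wheat. A short sketch of the arithmetic:

    price = 2.68             # $/bushel, 1998 harvest price
    operating_cost = 1.43    # $/bushel
    total_cost = 3.97        # $/bushel (operating plus ownership/overhead)
    yield_bu_per_acre = 41.7

    print(round(price - operating_cost, 2))  # 1.25 $/bu over operating costs
    print(round(price - total_cost, 2))      # -1.29 $/bu against total costs

    # 1 bushel of wheat = 60 lb = 27.2155 kg; 1 hectare = 2.4711 acres
    t_per_ha = yield_bu_per_acre * 27.2155 / 1000 * 2.4711
    print(round(t_per_ha, 2))                # ~2.80 t/ha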

In 2007 there was a dramatic rise in the price of wheat due to freezes and flooding in the northern
hemisphere and a drought in Australia. Wheat futures in September 2007 for December and March
delivery had risen above $9.00 a bushel, prices never seen before.[43] There were complaints in Italy about
the high price of pasta.[44] This followed a wider trend of escalating food prices around the globe, driven
in part by climatic conditions such as drought in Australia, the diversion of arable land to other uses (such
as producing government-subsidised bio-oil crops), and later by some food-producing nations placing
bans or restrictions on exports in order to satisfy their own consumers.

Other drivers affecting wheat prices include the movement to biofuels (in 2008, a third of corn crops in
the US were expected to be devoted to ethanol production)[citation needed] and rising incomes in developing
countries, which are causing a shift in eating patterns from predominantly rice to more meat-based diets
(a rise in meat production means a rise in grain consumption: seven kilograms of grain are required to
produce one kilogram of beef).[45]

[edit] Production and consumption

Worldwide wheat production


Main article: International wheat production statistics

In 2003, global per capita wheat consumption was 67 kg, with the highest per capita consumption
(239 kg) found in Kyrgyzstan.[46] In 1997, global wheat consumption was 101 kg per capita, with the
highest consumption (623 kg per capita) in Denmark, but most of this (81%) was for animal feed.[47]
Wheat is the primary food staple in North Africa and the Middle East, and is growing in popularity in
Asia. Unlike rice, wheat production is widespread globally, though China alone accounts for almost
one-sixth of world output.

In the 20th century, global wheat output expanded about five-fold, but until about 1955 most of this
reflected increases in wheat crop area, with lesser (about 20%) increases in crop yields per unit area. After
1955 however, there was a dramatic ten-fold increase in the rate of wheat yield improvement per year,
and this became the major factor allowing global wheat production to increase. Thus technological
innovation and scientific crop management with synthetic nitrogen fertilizer, irrigation and wheat
breeding were the main drivers of wheat output growth in the second half of the century. There were
some significant decreases in wheat crop area, for instance in North America.[48]

Better seed storage and germination ability (and hence a smaller requirement to retain harvested crop for
next year's seed) is another 20th century technological innovation. In Medieval England, farmers saved
one-quarter of their wheat harvest as seed for the next crop, leaving only three-quarters for food and feed
consumption. By 1999, the global average seed use of wheat was about 6% of output.
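
The effect of the falling seed-retention rate on net food supply is straightforward arithmetic; the comparison below is the author's illustration of the two figures quoted above.

    harvest = 100.0   # arbitrary units of grain harvested

    for label, seed_fraction in [("medieval (1/4 saved)", 0.25),
                                 ("modern (~6% saved)", 0.06)]:
        print(label, harvest * (1 - seed_fraction), "units left for food and feed")
    # medieval: 75.0 units; modern: 94.0 units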

Several factors are currently slowing the rate of global expansion of wheat production: population growth
rates are falling while wheat yields continue to rise, and the better economic profitability of other crops
such as soybeans and maize, linked with investment in modern genetic technologies, has promoted shifts
to other crops.

[edit] Farming systems

In the Punjab, India, and North China, irrigation has been a major contributor to increased grain output.
More widely over the last 40 years, a massive increase in fertilizer use together with the increased
availability of semi-dwarf varieties in developing countries, has greatly increased yields per hectare. In
developing countries, use of (mainly nitrogenous) fertilizer increased 25-fold in this period. However,
farming systems rely on much more than fertilizer and breeding to improve productivity. A good
illustration of this is Australian wheat growing in the southern winter cropping zone, where, despite low
rainfall (300 mm), wheat cropping is successful even with relatively little use of nitrogenous fertilizer.
This is achieved by 'rotation cropping' (traditionally called the ley system) with leguminous pastures and,
in the last decade, including a canola crop in the rotations has boosted wheat yields by a further 25%.[49]
In these low rainfall areas, better use of available soil-water (and better control of soil erosion) is achieved
by retaining the stubble after harvesting and by minimizing tillage.[50]

[edit] Futures contracts

Wheat futures are traded on the Chicago Board of Trade, Kansas City Board of Trade, and Minneapolis
Grain Exchange, and have delivery dates in March (H), May (K), July (N), September (U), and December
(Z).[51]
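
The single-letter codes in parentheses follow the standard futures month-code convention, which lends itself to a simple lookup table. In the sketch below, the "W" contract root and the symbol format are illustrative assumptions, not details from the text.

    WHEAT_DELIVERY_MONTHS = {
        "March": "H", "May": "K", "July": "N", "September": "U", "December": "Z",
    }

    def contract_symbol(root, month, year):
        """Build a conventional futures code, e.g. root 'W' + 'Z' + '9'."""
        return f"{root}{WHEAT_DELIVERY_MONTHS[month]}{year % 10}"

    print(contract_symbol("W", "December", 2009))  # WZ9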

Top Ten Wheat Producers, 2008 (million metric tons)

China            112
India             79
United States     68
Russia            64
France            39
Canada            29
Germany           26
Ukraine           26
Australia         21
Pakistan          21
World Total      690

Source: UN Food & Agriculture Organisation (FAO)[52]

[edit] Geographical variation

There are substantial differences in wheat farming, trading, policy, sector growth, and wheat uses in
different regions of the world. In the EU and Canada for instance, there is significant addition of wheat to
animal feeds, but less so in the USA.

The two biggest wheat producers are China and the EU, followed currently by India, then USA.
The developed countries, the USA, Canada, Australia, the EU and increasingly Argentina, are the major
exporters, with developing countries being the main importers, although both India and China are close to being
self-sufficient in wheat. In the rapidly developing countries of Asia, Westernization of diets associated
with increasing prosperity is leading to growth in per capita demand for wheat at the expense of the other
food staples.

In the past, there has been significant governmental intervention in wheat markets, such as price supports
in the USA and farm payments in the EU. In the EU these subsidies have encouraged heavy use of
fertilizer inputs, with resulting high crop yields. In Australia and Argentina direct government subsidies
are much lower.[53]

[edit] Agronomy

Wheat spikelet with the three anthers sticking out

[edit] Crop development

Wheat normally needs between 110 and 130 days between planting and harvest, depending upon climate,
seed type, and soil conditions (winter wheat lies dormant during a winter freeze). Optimal crop
management requires that the farmer have a detailed understanding of each stage of development in the
growing plants. In particular, spring fertilizers, herbicides, fungicides and growth regulators are typically
applied only at specific stages of plant development. For example, it is currently recommended that the
second application of nitrogen is best done when the ear (not visible at this stage) is about 1 cm in size
(Z31 on Zadoks scale). Knowledge of stages is also important to identify periods of higher risk from the
climate. For example, pollen formation from the mother cell, and the stages between anthesis and
maturity are susceptible to high temperatures, and this adverse effect is made worse by water stress.[54]
Farmers also benefit from knowing when the 'flag leaf' (last leaf) appears, as this leaf represents about
75% of photosynthesis reactions during the grain filling period, and so should be preserved from disease
or insect attacks to ensure a good yield.

Several systems exist to identify crop stages, with the Feekes and Zadoks scales being the most widely
used. Each scale is a standard system which describes successive stages reached by the crop during the
agricultural season.
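
For illustration, the Zadoks decimal code mentioned above (e.g. Z31) encodes the principal growth phase in its tens digit. The decade labels below are a simplified summary of the published scale, not the full standard:

    ZADOKS_DECADES = {
        0: "germination", 1: "seedling growth", 2: "tillering",
        3: "stem elongation", 4: "booting", 5: "ear emergence",
        6: "flowering (anthesis)", 7: "milk development",
        8: "dough development", 9: "ripening",
    }

    def describe(code):
        """Map a two-digit Zadoks code to its principal growth phase."""
        return f"Z{code}: {ZADOKS_DECADES[code // 10]}"

    print(describe(31))  # Z31: stem elongation (the first-node stage)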

Wheat at the anthesis stage. Face view (left) and side view (right)

[edit] Diseases

Main articles: Wheat diseases and List of wheat diseases

There are many wheat diseases, mainly caused by fungi, bacteria, and viruses.[55] Plant breeding to
develop new disease-resistant varieties, and sound crop management practices are important for
preventing disease. Fungicides, used to prevent serious crop losses from fungal disease, can be a
significant variable cost in wheat production. Estimates of the amount of wheat production lost owing to
plant diseases vary between 10 and 25% in Missouri.[56] A wide range of organisms infect wheat, of which
the most important are viruses and fungi.

The main wheat-disease categories are:

 Seed-borne diseases: these include seed-borne scab, seed-borne Stagonospora (previously known
as Septoria), common bunt (stinking smut), and loose smut. These are managed with fungicides.
 Leaf- and head- blight diseases: Powdery mildew, leaf rust, Septoria tritici leaf blotch,
Stagonospora (Septoria) nodorum leaf and glume blotch, and Fusarium head scab.
 Crown and root rot diseases: Two of the more important of these are 'take-all' and
Cephalosporium stripe. Both of these diseases are soil borne.

 Viral diseases: Wheat spindle streak mosaic (yellow mosaic) and barley yellow dwarf are the two
most common viral diseases. Control can be achieved by using resistant varieties.

[edit] Pests

Wheat is used as a food plant by the larvae of some Lepidoptera (butterfly and moth) species including
The Flame, Rustic Shoulder-knot, Setaceous Hebrew Character and Turnip Moth. Early in the season,
birds and rodents can also cause significant damage to a crop by digging up and eating newly planted
seeds or young plants. They can also damage the crop late in the season by eating the grain from the
mature spike. Recent post-harvest losses in cereals amount to billions of dollars per year in the USA
alone, and damage to wheat by various borers, beetles and weevils is no exception.[57] Rodents can also
cause major losses during storage, and in major grain growing regions, field mice numbers can sometimes
build up explosively to plague proportions because of the ready availability of food.[58] To reduce the
amount of wheat lost to post-harvest pests, Agricultural Research Service scientists have developed an
"insect-o-graph," which can detect insects in wheat that are not visible to the naked eye. The device uses
electrical signals to detect the insects as the wheat is being milled. The new technology is so precise that it
can detect 5-10 infested seeds out of 300,000 good ones.[59] Tracking insect infestations in stored grain is
critical for food safety as well as for the marketing value of the crop.

Rice


For other uses, see Rice (disambiguation).

American long-grain rice plants

Rice, white, long-grain, regular, raw

Nutritional value per 100 g (3.5 oz)

Energy                 1,527 kJ (365 kcal)
Carbohydrates          80 g
Sugars                 0.12 g
Dietary fiber          1.3 g
Fat                    0.66 g
Protein                7.13 g
Water                  11.62 g
Thiamine (Vit. B1)     0.0701 mg (5%)
Riboflavin (Vit. B2)   0.0149 mg (1%)
Niacin (Vit. B3)       1.62 mg (11%)
Pantothenic acid (B5)  1.014 mg (20%)
Vitamin B6             0.164 mg (13%)
Calcium                28 mg (3%)
Iron                   0.80 mg (6%)
Magnesium              25 mg (7%)
Manganese              1.088 mg (54%)
Phosphorus             115 mg (16%)
Potassium              115 mg (2%)
Zinc                   1.09 mg (11%)

Percentages are relative to US recommendations for adults.
Source: USDA Nutrient database

Oryza sativa

Rice stem cross section magnified 400 times

A: Rice with chaff
B: Brown rice
C: Rice with germ
D: White rice with bran residue
E: Musenmai (Japanese: 無洗米), "polished and ready-to-boil rice", literally, non-wash rice
(1): Chaff
(2): Bran
(3): Bran residue
(4): Cereal germ
(5): Endosperm

Rice is the seed of the monocot plants Oryza sativa or Oryza glaberrima. As a cereal grain, it is the most
important staple food for a large part of the world's human population, especially in East and South Asia,
the Middle East, Latin America, and the West Indies. It is the grain with the second-highest worldwide
production, after maize (corn).[1]

Since a large portion of maize crops are grown for purposes other than human consumption, rice is the
most important grain with regard to human nutrition and caloric intake, providing more than one fifth of
the calories consumed worldwide by the human species.[2]

A traditional food plant in Africa, its cultivation declined in colonial times, but its production has the
potential to improve nutrition, boost food security, foster rural development and support sustainable
landcare.[citation needed] It helped Africa conquer its famine of 1203.[3]

Rice is normally grown as an annual plant, although in tropical areas it can survive as a perennial and can
produce a ratoon crop for up to 30 years.[4] The rice plant can grow to 1–1.8 m (3.3–5.9 ft) tall,
occasionally more depending on the variety and soil fertility. It has long, slender leaves 50–100 cm (20–
39 in) long and 2–2.5 cm (0.79–0.98 in) broad. The small wind-pollinated flowers are produced in a
branched arching to pendulous inflorescence 30–50 cm (12–20 in) long. The edible seed is a grain
(caryopsis) 5–12 mm (0.20–0.47 in) long and 2–3 mm (0.079–0.12 in) thick.

Rice cultivation is well-suited to countries and regions with low labor costs and high rainfall, as it is
labor-intensive to cultivate and requires ample water. Rice can be grown practically anywhere, even on a
steep hill or mountain. Although its parent species are native to South Asia and certain parts of Africa,
centuries of trade and exportation have made it commonplace in many cultures worldwide.

The traditional method for cultivating rice is flooding the fields while, or after, setting the young
seedlings. This simple method requires sound planning and servicing of the water damming and
channeling, but reduces the growth of less robust weed and pest plants that have no submerged growth
state, and deters vermin. While flooding is not mandatory for the cultivation of rice, all other methods of
irrigation require higher effort in weed and pest control during growth periods and a different approach
for fertilizing the soil.

(The name wild rice is usually used for species of the grass genus Zizania, both wild and domesticated,
although the term may also be used for primitive or uncultivated varieties of Oryza.)

[edit] Preparation as food

Rice broker in 1820s Japan. "36 Views of Mount Fuji" Hokusai

Old-fashioned method of polishing rice in Japan. "36 Views of Mount Fuji" Hokusai

Rice plantation in Java, Indonesia

Planting rice is a labour-intensive process.

The seeds of the rice plant are first milled using a rice huller to remove the chaff (the outer husks of the
grain). At this point in the process, the product is called brown rice. The milling may be continued,
removing the 'bran', i.e., the rest of the husk and the germ, thereby creating white rice. White rice, which
keeps longer, lacks some important nutrients; in a limited diet which does not supplement the rice, brown
rice helps to prevent the disease beriberi.

White rice may also be buffed with glucose or talc powder (often called polished rice, though this term
may also refer to white rice in general), parboiled, or processed into flour. White rice may also be
enriched by adding nutrients, especially those lost during the milling process. While the cheapest method
of enriching involves adding a powdered blend of nutrients that will easily wash off (in the United States,
rice which has been so treated requires a label warning against rinsing), more sophisticated methods apply
nutrients directly to the grain, coating the grain with a water-insoluble substance which is resistant to
washing.

In some countries parboiled rice is popular. Parboiled rice is subjected to a steaming or parboiling process
while still brown rice. This causes nutrients from the outer husk, especially thiamine, to move into the
grain itself. The parboil process causes a gelatinisation of the starch in the grains. The grains become less
brittle, and the color of the milled grain changes from white to yellow. The rice is then dried, and can then
be milled as usual or used as brown rice. Milled parboiled rice is nutritionally superior to standard milled
rice. Parboiled rice has an additional benefit in that it does not stick to the pan during cooking, as happens
when cooking regular white rice. This type of rice is eaten in parts of India, and countries of West Africa
are also accustomed to consuming parboiled rice.

Despite the hypothetical health risks of talc (such as stomach cancer),[5] talc-coated rice remains the norm
in some countries due to its attractive shiny appearance, but it has been banned in some, and is no longer
widely used in others (such as the United States). Even where talc is not used, glucose, starch, or other
coatings may be used to improve the appearance of the grains.

Rice bran, called nuka in Japan, is a valuable commodity in Asia and is used for many daily needs. It is a
moist, oily inner layer which is heated to produce oil. It is also used as a pickling bed in making rice bran
pickles and Takuan.

Raw rice may be ground into flour for many uses, including making many kinds of beverages such as
amazake, horchata, rice milk, and sake. Rice flour does not contain gluten and is suitable for people on a
gluten-free diet. Rice may also be made into various types of noodles. Raw, wild, or brown rice may also
be consumed by raw-foodists or fruitarians if soaked and sprouted (usually 1 week to 30 days); see also
Gaba rice below.

Processed rice seeds must be boiled or steamed before eating. Cooked rice may be further fried in cooking
oil or butter, or beaten in a tub to make mochi.

Rice is a good source of protein and a staple food in many parts of the world, but it is not a complete
protein: it does not contain all of the essential amino acids in sufficient amounts for good health, and
should be combined with other sources of protein, such as nuts, seeds, beans, fish, or meat.[6]

Rice, like other cereal grains, can be puffed (or popped). This process takes advantage of the grains' water
content and typically involves heating grains in a special chamber. Further puffing is sometimes
accomplished by processing pre-puffed pellets in a low-pressure chamber. The ideal gas law means that
either lowering the local pressure or raising the water temperature produces an increase in volume prior to
water evaporation, resulting in a puffy texture. Bulk raw rice density is about 0.9 g/cm³. It decreases to
less than one-tenth that when puffed.
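
The quoted densities imply more than a tenfold volume expansion for a given mass, as the short sketch below works out from the figures in the paragraph (author's arithmetic):

    raw_density = 0.9       # g/cm^3, bulk raw rice (from the text)
    puffed_density = 0.09   # g/cm^3, upper bound after puffing (< 1/10 of raw)

    mass_g = 100.0
    print(round(mass_g / raw_density))     # ~111 cm^3 before puffing
    print(round(mass_g / puffed_density))  # ~1111 cm^3 after, >10x the volume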

[edit] Cooking

There are many varieties of rice; for many purposes the main distinction is between long- and medium-
grain rice. The grains of long-grain rice (high in amylose) tend to remain intact after cooking; medium-grain
rice (high in amylopectin) becomes more sticky. Medium-grain rice is used for sweet dishes, for risotto in
Italy, and for many arrossos (such as arròs negre) in Spain.

Uncooked, polished, white long-grain rice grains

Chinese rice dish utilizing Basmati rice

Rice served along with Indian curry. Note the yellowish tinge in the rice. It is due to the common practice
of adding turmeric during cooking.

Unmilled to milled rice, from right to left, brown rice, rice with germ, white rice

Rice is cooked by boiling or steaming, and absorbs water during cooking. It can be cooked in just as
much water as it absorbs (the absorption method), or in a large quantity of water which is drained before
serving (the rapid-boil method).[7] Electric rice cookers, popular in Asia and Latin America, simplify the
process of cooking rice. Rice is often heated in oil[citation needed] before boiling, or oil is added to the water;
this is thought to make the cooked rice less sticky.

In Arab cuisine rice is an ingredient of many soups and dishes with fish, poultry, and other types of meat.
It is also used to stuff vegetables or is wrapped in grape leaves. When combined with milk, sugar and honey, it is used to make desserts. In some regions, such as Tabaristan, bread is made using rice flour.
Medieval Islamic texts spoke of medical uses for the plant.[8]

Rice may also be made into rice porridge (also called congee, fawrclaab, okayu, jook, or rice gruel) by
adding more water than usual, so that the cooked rice is saturated with water to the point that it becomes
very soft, expanded, and fluffy. Rice porridge is commonly eaten as a breakfast food, and is also a
traditional food for the sick.

Rice may be soaked prior to cooking, which saves fuel, decreases cooking time, minimizes exposure to
high temperature and thus decreases the stickiness of the rice. For some varieties, soaking improves the
texture of the cooked rice by increasing expansion of the grains.

Instant rice differs from parboiled rice in that it is milled, fully cooked and then dried. There is also a
significant degradation in taste and texture.

A nutritionally superior method of preparing brown rice known as GABA Rice or GBR (Germinated
Brown Rice)[9] may be used. This involves soaking washed brown rice for 20 h in warm water (38°C or
100°F) prior to cooking it. This process stimulates germination, which activates various enzymes in the
rice. By this method, a result of research carried out for the United Nations International Year of Rice, it
is possible to obtain a more complete amino acid profile, including GABA.

Cooked rice can contain Bacillus cereus spores, which produce an emetic toxin when the rice is left at 4–60 °C (39–140 °F).[6] When storing cooked rice for use the next day, rapid cooling is advised to reduce the risk of toxin production.
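As a concrete illustration of that storage advice, here is a small Python sketch of the 4–60 °C window quoted above; the helper name and example temperatures are ours, not from any food-safety standard or library.

# Temperature window from the text: B. cereus can produce emetic toxin in
# cooked rice held between 4 and 60 C (39-140 F).
DANGER_ZONE_C = (4.0, 60.0)

def in_toxin_risk_window(holding_temp_c: float) -> bool:
    """True if cooked rice held at this temperature sits in the risk window."""
    lo, hi = DANGER_ZONE_C
    return lo <= holding_temp_c <= hi

print(in_toxin_risk_window(21.0))  # True  - room-temperature storage is risky
print(in_toxin_risk_window(3.0))   # False - rapid cooling below 4 C avoids the window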

Rice flour and starch often are used in batters and breadings to increase crispiness.

[edit] Synopsis of major staple foods

Synopsis[10] of staple foods, ~composition per 100 g portion:

Component              Amaranth[11]  Wheat[12]  White rice[13]  Sweetcorn[14]  Potato[15]
water (g)                  11            11          12              76             82
energy (kJ)              1554          1506        1527             360            288
protein (g)                14            23           7               3            1.7
fat (g)                     7            10           1               1            0.1
carbohydrates (g)          65            52          79              19             16
fiber (g)                   7            13           1               3            2.4
sugars (g)                1.7           0.1        >0.1               3            1.2
iron (mg)                 7.6           6.3         0.8             0.5            0.5
manganese (mg)            3.4          13.3         1.1             0.2            0.1
calcium (mg)              159            39          28               2              9
magnesium (mg)            248           239          25              37             21
phosphorus (mg)           557           842         115              89             62
potassium (mg)            508           892         115             270            407
zinc (mg)                 2.9          12.3         1.1             0.5            0.3
pantothenic acid (mg)     1.5           0.1         1.0             0.7            0.3
vitamin B6 (mg)           0.6           1.3         0.2             0.1            0.2
folate (µg)                82           281           8              42             18
thiamin (mg)              0.1           1.9         0.1             0.2            0.1
riboflavin (mg)           0.2           0.5        >0.1             0.1           >0.1
niacin (mg)               0.9           6.8         1.6             1.8            1.1
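To make the dry grains and the watery staples easier to compare, the Python sketch below re-keys a few rows of the table and normalizes protein by energy; the values are copied from the table, but the per-megajoule comparison is our own framing.

# A few columns of the synopsis table above, per 100 g portion.
staples = {
    "amaranth":   {"energy_kJ": 1554, "protein_g": 14.0},
    "wheat":      {"energy_kJ": 1506, "protein_g": 23.0},
    "white rice": {"energy_kJ": 1527, "protein_g": 7.0},
    "sweetcorn":  {"energy_kJ": 360,  "protein_g": 3.0},
    "potato":     {"energy_kJ": 288,  "protein_g": 1.7},
}

# Protein per megajoule corrects for water content, which otherwise dominates
# the raw per-100-g numbers (sweetcorn and potato are mostly water).
for name, comp in staples.items():
    g_per_MJ = comp["protein_g"] / comp["energy_kJ"] * 1000
    print(f"{name:11s} {g_per_MJ:5.1f} g protein per MJ")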

[edit] Rice growing ecology

Rice can be grown in different environments, depending upon water availability.[16]

1. Lowland, rainfed: drought-prone and favoring medium depth; waterlogged, submergence- and flood-prone
2. Lowland, irrigated: grown in both the wet season and the dry season
3. Deep water or floating rice
4. Coastal wetland
5. Upland rice, also known as 'Ghaiya rice', well known for its drought tolerance[17]

[edit] History of domestication & cultivation

See also: Oryza sativa#History of domestication and cultivation

[edit] Asia

Terraced rice paddy on a hill slope in Indonesia.

The average Asian rice farmer owns a few hectares: Banaue Rice Terraces, N. Luzon, Philippines

Rice field under monsoon clouds in Pegu Division, Burma

Rice was first domesticated in the region of the Yangtze River valley.[18][19] Morphological studies of rice
phytoliths from the Diaotonghuan archaeological site clearly show the transition from the collection of
wild rice to the cultivation of domesticated rice. The large number of wild rice phytoliths at the
Diaotonghuan level dating from 12,000-11,000 BP indicates that wild rice collection was part of the local
means of subsistence. Changes in the morphology of Diaotonghuan phytoliths dating from 10,000-8,000
BP show that rice had by this time been domesticated.[20] Soon afterwards the two major varieties of
indica and japonica rice were being grown in Central China.[19] In the late 3rd millennium BC there was a rapid expansion of rice cultivation into mainland Southeast Asia and westwards across India and
Pakistan.[19]

The earliest remains of cultivated rice in India have been found in the north and west and date from
around 2000 BC. Perennial wild rices still grow in Assam and Nepal. It seems to have appeared around
1400 BC in southern India after its domestication in the northern plains.[citation needed] It then spread to all the
fertile alluvial plains watered by rivers. Cultivation and cooking methods are thought to have spread to
the west rapidly and by medieval times, southern Europe saw the introduction of rice as a hearty grain.

Rice is first mentioned in the Yajur Veda (c. 1500-800 BC) and then is frequently referred to in Sanskrit
texts.[citation needed] In India, there is a saying that grains of rice should be like two brothers, close but not
stuck together.[citation needed] Rice is often directly associated with prosperity and fertility, therefore there is
the custom of throwing rice at weddings.[21]

Today, the majority of all rice produced comes from China, Korea, India, Pakistan, Indonesia,
Bangladesh, Vietnam, Thailand, Myanmar, Philippines, and Japan. Asian farmers still account for 92% of
the world's total rice production. Rice is grown in all parts of India and in northern and central Pakistan.
Basmati rice cultivated in the northern plains of the Punjab region is famous all over the world for its
aroma and quality.

[edit] Companion plant

One of the earliest known examples of companion planting is the growing of rice with Azolla, the mosquito fern, which covers the top of a fresh rice paddy's water, blocking out any competing plants and fixing nitrogen from the atmosphere for the rice to use. The rice is planted when it is tall enough to poke out above the azolla. This method has been used for at least a thousand years.

[edit] Africa

Main article: Oryza glaberrima

African rice has been cultivated for 3500 years. Between 1500 and 800 BC, Oryza glaberrima propagated
from its original centre, the Niger River delta, and extended to Senegal. However, it never developed far
from its original region. Its cultivation even declined in favour of the Asian species, possibly brought to
the African continent by Arabs coming from the east coast between the 6th and 11th centuries CE.

Rice crop in Madagascar

[edit] Middle East

According to Zohary and Hopf (2000, p. 91), O. sativa was introduced to the Middle East in Hellenistic
and Parthian times, and was familiar to both Greek and Roman writers. They report that a large sample of
rice grains was recovered from a grave at Susa in Iran (dated to the 1st century AD) at one end of the
ancient world, while at the same time rice was grown in the Po valley in Italy.

Rice was grown in some areas of southern Iraq. With the rise of Islam it moved north to Nisibin,
the southern shores of the Caspian Sea and then beyond the Muslim world into the valley of Volga. In
Israel, rice came to be grown in the Jordan Valley. Rice is also grown in Yemen.[22]

[edit] Europe

The Moors brought Asiatic rice to the Iberian Peninsula in the 10th century. Records indicate it was
grown in Valencia and Majorca. In Majorca, rice cultivation seems to have stopped after the Christian
conquest, although historians are not certain.[22]

Muslims also brought rice to Sicily, where it was an important crop[22] long before it was noted in the plain of Pisa (1468) or in the Lombard plain (1475), where its cultivation was promoted by Ludovico Sforza,
Duke of Milan, and demonstrated in his model farms.[23]

After the 15th century, rice spread throughout Italy and then France, later propagating to all the continents
during the age of European exploration.

[edit] Caribbean and Latin America

Rice is not native to the Americas but was introduced to the Caribbean and South America by European
colonizers at an early date with Spanish colonizers introducing Asian rice to Mexico in the 1520s at
Veracruz and the Portuguese and their African slaves introducing it at about the same time to Colonial
Brazil.[24] Recent scholarship suggests that African slaves played an active role in the establishment of
rice in the New World and that African rice was an important crop from an early period.[25] Rice and bean dishes that were a staple among the peoples of West Africa remained a staple among their descendants subjected to slavery in the Spanish New World colonies, Brazil, and elsewhere in the Americas.[3]

[edit] United States

South Carolina rice plantation (Mansfield Plantation, Georgetown.)

In 1694, rice arrived in South Carolina, probably originating from Madagascar.[24]

In the United States, colonial South Carolina and Georgia grew rice and amassed great wealth from the slave labor obtained from the Senegambia area of West Africa and from coastal Sierra Leone. At the port of
Charleston, through which 40% of all American slave imports passed, slaves from this region of Africa
brought the highest prices, in recognition of their prior knowledge of rice culture, which was put to use on
the many rice plantations around Georgetown, Charleston, and Savannah. From the enslaved Africans,
plantation owners learned how to dyke the marshes and periodically flood the fields. At first the rice was
milled by hand with wooden paddles, then winnowed in sweetgrass baskets (the making of which was
another skill brought by slaves from Africa). The invention of the rice mill increased profitability of the
crop, and the addition of water power for the mills in 1787 by millwright Jonathan Lucas was another step
forward. Rice culture in the southeastern U.S. became less profitable with the loss of slave labor after the
American Civil War, and it finally died out just after the turn of the 20th century. Today, people can visit
the only remaining rice plantation in South Carolina that still has the original winnowing barn and rice
mill from the mid-19th century at the historic Mansfield Plantation in Georgetown, South Carolina. The
predominant strain of rice in the Carolinas was from Africa and was known as "Carolina Gold." The
cultivar has been preserved and there are current attempts to reintroduce it as a commercially grown
crop.[26]

In the southern United States, rice has been grown in southern Arkansas, Louisiana, and east Texas since
the mid-19th century. Many Cajun farmers grew rice in wet marshes and low lying prairies where they
could also farm crayfish when the fields were flooded.[27] In recent years rice production has risen in
North America, especially in the Mississippi River Delta areas in the states of Arkansas and Mississippi.

Rice cultivation began in California during the California Gold Rush, when an estimated 40,000 Chinese
laborers immigrated to the state and grew small amounts of the grain for their own consumption.
However, commercial production began only in 1912 in the town of Richvale in Butte County.[28] By
2006, California produced the second largest rice crop in the United States,[29] after Arkansas, with
production concentrated in six counties north of Sacramento.[30] Unlike the Mississippi Delta region,
California's production is dominated by short- and medium-grain japonica varieties, including cultivars
developed for the local climate such as Calrose, which makes up as much as 85% of the state's crop.[31]

References to wild rice in the Americas are to the unrelated Zizania palustris.

More than 100 varieties of rice are commercially produced primarily in six states (Arkansas, Texas,
Louisiana, Mississippi, Missouri, and California) in the U.S.[32] According to estimates for the 2006 crop
year, rice production in the U.S. is valued at $1.88 billion, approximately half of which is expected to be
exported. The U.S. provides about 12% of world rice trade.[32] The majority of domestic utilization of
U.S. rice is direct food use (58%), while 16% is used in each of processed foods and beer. The remaining
10% is found in pet food.[32]

[edit] Australia

Rice was one of the earliest crops planted in Australia by British settlers, who had experience with rice
plantations in the Americas and the subcontinent.

Although attempts to grow rice in the well-watered north of Australia have been made for many years,
they have consistently failed because of inherent iron and manganese toxicities in the soils and
destruction by pests.

In the 1920s it was seen as a possible irrigation crop on soils within the Murray-Darling Basin that were
too heavy for the cultivation of fruit and too infertile for wheat.[33]

Because irrigation water, despite the extremely low runoff of temperate Australia, was (and remains) very
cheap, the growing of rice was taken up by agricultural groups over the following decades. Californian
varieties of rice were found suitable for the climate in the Riverina, and the first mill opened at Leeton in
1951.

Even before this, Australia's rice production greatly exceeded local needs,[33] and rice exports to Japan have become a major source of foreign currency. Above-average rainfall from the 1950s to the middle 1990s[34] encouraged the expansion of the Riverina rice industry, but its prodigious water use in a practically waterless region began to attract the attention of environmental scientists, who became seriously concerned with declining flow in the Snowy River and the lower Murray River.

Although rice growing in Australia is highly profitable due to the cheapness of land, several recent years
of severe drought have led many to call for its elimination because of its effects on extremely fragile
aquatic ecosystems. The Australian rice industry is somewhat opportunistic, with the area planted varying
significantly from season to season depending on water allocations in the Murray and Murrumbidgee
irrigation regions.

[edit] Production and commerce

Worldwide rice production

Paddy rice output in 2005.

[edit] Production

World production of rice[36] has risen steadily from about 200 million tonnes of paddy rice in 1960 to over 607.9 million tonnes in 2004, 634.5 million tonnes in 2005, and 685 million tonnes in 2008. In 2004, the top four producers were China (26% of world production), India (20%), Indonesia (9%) and Bangladesh (5%).[citation needed]

Production of rice by country — 2007 (million metric ton)[35]

China        187
India        144
Indonesia     57
Bangladesh    43
Vietnam       35
Thailand      32
Myanmar       31
Philippines   16
Brazil        11
Japan         10

Source: Food and Agriculture Organization

[edit] Harvesting, drying and milling

Unmilled rice, known as paddy (Indonesia and Malaysia: padi; Philippines, palay), is usually harvested when the grains have a moisture content of around 25 percent. In most Asian countries, where rice is almost entirely the product of smallholder agriculture, harvesting is carried out manually, although there is a growing interest in mechanical harvesting. Harvesting can be carried out by the farmers themselves, but is also frequently done by seasonal labour groups. Harvesting is followed by threshing, either immediately or within a day or two. Again, much threshing is still carried out by hand but there is an increasing use of mechanical threshers. Subsequently, paddy needs to be dried to bring down the moisture content to no more than 20 percent for milling. A familiar sight in several Asian countries is paddy laid out to dry along roads. However, in most countries the bulk of drying of marketed paddy takes place in mills, with village-level drying being used for paddy to be consumed by farm families. Mills either sun dry or use mechanical driers or both. Drying has to be carried out quickly to avoid the formation of moulds. Mills range from simple hullers, with a throughput of a couple of tons a day, that simply remove the outer husk, to enormous operations that can process 4,000 tons a day and produce highly polished rice. A good mill can achieve a paddy-to-rice conversion rate of up to 72 percent, but smaller, inefficient mills often struggle to achieve 60 percent. These smaller mills often do not buy paddy and sell rice but only service farmers who want to mill their paddy for their own consumption.
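A quick back-of-envelope on those conversion rates, as a Python sketch; the daily tonnage is the large-mill throughput quoted above, and the function name is ours.

# Paddy-to-rice conversion figures from the text: up to 72% for a good mill,
# often only ~60% for smaller, inefficient mills.
def milled_rice_output(paddy_tonnes: float, conversion_rate: float) -> float:
    """Tonnes of milled rice recovered from a given tonnage of paddy."""
    return paddy_tonnes * conversion_rate

daily_paddy = 4000  # tonnes/day, the throughput of the largest mills above
print(milled_rice_output(daily_paddy, 0.72))  # 2880.0 t from an efficient mill
print(milled_rice_output(daily_paddy, 0.60))  # 2400.0 t from a struggling mill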

[edit] Distribution

Because of the importance of rice to human nutrition and food security in Asia, the domestic rice markets
tend to be subject to considerable state involvement. While the private sector plays a leading role in most
countries, agencies such as BULOG in Indonesia, the NFA in the Philippines, VINAFOOD in Vietnam
and the Food Corporation of India are all heavily involved in purchasing of paddy from farmers or rice
from mills and in distributing rice to poorer people. BULOG and NFA monopolise rice imports into their countries while VINAFOOD controls all exports from Vietnam.[37]

[edit] Trade

World trade figures are very different from those for production, as only about 5–6% of rice produced is traded internationally. The three largest exporting countries are Thailand, Vietnam, and the United States. Major importers usually include Bangladesh, the Philippines, Brazil and some African and Persian Gulf countries. Although China and India are the two largest producers of rice in the world, both countries consume the majority of the rice produced domestically, leaving little to be traded internationally.

[edit] Price

In late 2007 to May 2008, the price of grains rose greatly due to droughts in major producing countries
(particularly Australia), increased use of grains for animal feed and US subsidies for bio-fuel production.
Although there was no shortage of rice on world markets this general upward trend in grain prices led to
panic buying by consumers, government rice export bans (in particular, by Vietnam) and inflated import
orders by the Philippines marketing board, the National Food Authority. This caused significant rises in
rice prices. In late April 2008, prices hit 24 US cents a pound, twice the price of seven months earlier.[38]

On April 30, 2008, Thailand announced plans for the creation of the Organisation of Rice Exporting
Countries (OREC) with the intention that this should develop into a price-fixing cartel for rice.[39][40]

[edit] Worldwide consumption

Consumption of rice by country — 2003/2004 (million metric ton)[41]

China          135
India          85.25
Indonesia      36.95
Bangladesh     26.4
Brazil         24
Vietnam        18
Thailand       10
Myanmar        10
Philippines    9.7
Japan          8.7
Mexico         7.3
Pakistan       6.0
South Korea    5.0
Egypt          3.9
United States  3.9

Source: United States Department of Agriculture

Between 1961 and 2002, per capita consumption of rice increased by 40%.

Rice is the most important crop in Asia. In Cambodia, for example, 90% of the total agricultural area is used for rice production.[42]

U.S. rice consumption has risen sharply over the past 25 years, fueled in part by commercial applications such as beer production.[43] Almost one in five adult Americans now report eating at least half a serving of white or brown rice per day.[44]

[edit] Environmental impacts

In many countries where rice is the main cereal crop, rice cultivation is responsible for most of the
methane emissions.[45] Rice requires slightly more water to produce than other grains.[46]

As sea levels rise, rice paddies will tend to remain flooded for longer periods of time. The longer the soil stays under water, the more it is cut off from atmospheric oxygen, causing fermentation of the organic matter in the soil. Under these anaerobic conditions the soil cannot retain its carbon: soil microbes convert the carbon into methane, which is then released through the respiration of the rice plant or by diffusion through the water. Methane from agriculture currently contributes ~15% of anthropogenic greenhouse gas emissions, as estimated by the IPCC. A further rise in sea level of 10-85 centimeters would therefore stimulate the release of more methane into the air by rice plants. Methane is twenty times more potent a greenhouse gas than carbon dioxide.[47]
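That potency figure translates into CO2-equivalents as in the short Python sketch below; it uses the factor of 20 quoted above, though published global-warming potentials for methane vary with the time horizon considered.

# CO2-equivalent of a methane emission, using the article's potency factor.
GWP_METHANE = 20  # "twenty times more potent" per the text; estimates vary

def co2_equivalent_tonnes(methane_tonnes: float) -> float:
    return methane_tonnes * GWP_METHANE

print(co2_equivalent_tonnes(1.0))  # 1 t of CH4 warms roughly like 20 t of CO2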

A 2010 study found that, as a result of rising temperatures and decreasing solar radiation during the later
years of the 20th century, the rice yield growth rate has decreased in many parts of Asia, compared to
what would have been observed had the temperature and solar radiation trends not occurred.[48][49] The
yield growth rate had fallen 10-20% at some locations. The study was based on records from 227 farms in six important rice-producing countries: Thailand, Vietnam, India, China, Bangladesh, and Pakistan.
The mechanism of this falling yield was not clear but might involve increased respiration during warm
nights, so expending energy without being able to photosynthesise.

[edit] Pests and diseases

Main article: List of rice diseases

Rice pests are any organisms or microbes with the potential to reduce the yield or value of the rice crop
(or of rice seeds).[50] (Jahn et al. 2007) Rice pests include weeds, pathogens, insects, rodents, and birds. A
variety of factors can contribute to pest outbreaks, including the overuse of pesticides and high rates of
nitrogen fertilizer application.[51] Weather conditions also contribute to pest outbreaks. For example, rice
gall midge and army worm outbreaks tend to follow periods of high rainfall early in the wet season, while
thrips outbreaks are associated with drought.[52]

Crop protection scientists are trying to develop rice pest management techniques which are sustainable, in other words, techniques that manage crop pests in such a manner that future crop production is not threatened.[53] At
present, rice pest management includes cultural techniques, pest-resistant rice varieties, and pesticides
(which include insecticide). Increasingly, there is evidence that farmers' pesticide applications are often
unnecessary, and even facilitate pest outbreaks.[54][55][56][57][58][59] By reducing the populations of natural
enemies of rice pests,[60] misuse of insecticides can actually lead to pest outbreaks (Cohen et al. 1994).

Botanicals, so-called "natural pesticides", are used by some farmers in an attempt to control rice pests.
Botanicals include extracts of leaves, or a mulch of the leaves themselves. Some upland rice farmers in
Cambodia spread chopped leaves of the bitter bush (Chromolaena odorata) over the surface of fields after
planting. This practice probably helps the soil retain moisture and thereby facilitates seed germination.
Farmers also claim the leaves are a natural fertilizer and help suppress weed and insect infestations.[61]

Among rice cultivars there are differences in the responses to, and recovery from, pest damage.[62]
Therefore, particular cultivars are recommended for areas prone to certain pest problems. The genetically
based ability of a rice variety to withstand pest attacks is called resistance.[63] Three main types of plant
resistance to pests are recognized as nonpreference, antibiosis, and tolerance.[64] Nonpreference (or
antixenosis) describes host plants which insects prefer to avoid; antibiosis is where insect survival is
reduced after the ingestion of host tissue; and tolerance is the capacity of a plant to produce high yield or
retain high quality despite insect infestation.[65] Over time, the use of pest resistant rice varieties selects
for pests that are able to overcome these mechanisms of resistance. When a rice variety is no longer able
to resist pest infestations, resistance is said to have broken down. Rice varieties that can be widely grown
for many years in the presence of pests, and retain their ability to withstand the pests are said to have
durable resistance. Mutants of popular rice varieties are regularly screened by plant breeders to discover
new sources of durable resistance.[66]

Major rice pests include the brown planthopper,[67] the rice gall midge,[68] the rice bug,[69] the rice
leafroller,[70] rice weevils,[71] stemborer,[72] panicle rice mite, rats,[73] and the weed Echinochloa crus-galli.[74]

Major rice diseases include rice ragged stunt, sheath blight, and tungro.[75] Rice blast, caused by the
fungus Magnaporthe grisea, is the most significant disease affecting rice cultivation. There is also an
ascomycete fungus, Cochliobolus miyabeanus, that causes brown spot disease in rice.[76][77]

[edit] Parasitic weeds

Rice is parasitized by the weed eudicot Striga hermonthica.[78] This parasitic weed is a devastating pest on
the crop.

[edit] Cultivars

Main article: List of rice varieties

Rice seed collection from IRRI

While most rice is bred for crop quality and productivity, there are varieties selected for characteristics
such as texture, smell, and firmness. Cultivars exist that are adapted to deep flooding, and these are
generally called "floating rice".[79]

There are four major categories of rice worldwide: indica, japonica, aromatic, and glutinous. The different
varieties of rice are not considered interchangeable, either in food preparation or agriculture, so as a
result, each major variety is a completely separate market from other varieties. It is common for one
variety of rice to rise in price while another one drops in price.[80]

The largest collection of rice cultivars is at the International Rice Research Institute (IRRI) in the
Philippines, with over 100,000 rice accessions[81] held in the International Rice Genebank.[82] Rice
cultivars are often classified by their grain shapes and texture. For example, Thai Jasmine rice is long-
grain and relatively less sticky, as long-grain rice contains less amylopectin than short-grain cultivars.
Chinese restaurants usually serve long-grain as plain unseasoned steamed rice. Japanese mochi rice and
Chinese sticky rice are short-grain. Chinese people use sticky rice, properly known as "glutinous rice" (note: glutinous refers to the glue-like character of the cooked grain, not to gluten), to make zongzi. Japanese table rice is a sticky, short-grain rice. Japanese sake rice is another kind as well.

Indian rice cultivars include long-grained and aromatic Basmati (grown in the North), long and medium-grained Patna rice, and short-grained Sona Masoori (also spelled Sona Masuri). In the state of Tamil Nadu, the most prized cultivar is ponni, which is primarily grown in the delta regions of the Kaveri River. Kaveri is also referred to as ponni in the South, and the name reflects the geographic region where it is grown. In the Western Indian state of Maharashtra, a short-grain variety called Ambemohar is very popular. This rice has a characteristic fragrance of mango blossom.

Unpolished long-grain rice grains with bran

Polished Indian sona masuri rice grains

Aromatic rices have definite aromas and flavours; the most noted cultivars are Thai fragrant rice,
Basmati, Patna rice, Vietnamese fragrant rice, and a hybrid cultivar from America sold under the trade
name, Texmati. Both Basmati and Texmati have a mild popcorn-like aroma and flavour. In Indonesia
there are also red and black cultivars.

High-yield cultivars of rice suitable for cultivation in Africa and other dry ecosystems called the new rice
for Africa (NERICA) cultivars have been developed. It is hoped that their cultivation will improve food
security in West Africa.

Draft genomes for the two most common rice cultivars, indica and japonica, were published in April
2002. Rice was chosen as a model organism for the biology of grasses because of its relatively small
genome (~430 megabase pairs). Rice was the first crop with a complete genome sequence.[83]

On December 16, 2002, the UN General Assembly declared the year 2004 the International Year of Rice.
The declaration was sponsored by more than 40 countries.

[edit] Biotechnology

[edit] High-yielding varieties

Main article: High-yielding variety

The High Yielding Varieties are a group of crops created intentionally during the Green Revolution to
increase global food production. Rice, like corn and wheat, was genetically manipulated to increase its
yield. This project enabled labor markets in Asia to shift away from agriculture, and into industrial
sectors. The first "Rice Car", IR8 was produced in 1966 at the International Rice Research Institute which
is based in the Philippines at the University of the Philippines' Los Baños site. IR8 was created through a
cross between an Indonesian variety named "Peta" and a Chinese variety named "Dee Geo Woo Gen."[84]

Scientists have identified and cloned many genes involved in the gibberellin signaling pathway, including
GAI1 (Gibberellin Insensitive) and SLR1 (Slender Rice).[85] Disruption of gibberellin signaling can lead
to significantly reduced stem growth leading to a dwarf phenotype. Photosynthetic investment in the stem
is reduced dramatically as the shorter plants are inherently more stable mechanically. Assimilates become
redirected to grain production, amplifying in particular the effect of chemical fertilizers on commercial
yield. In the presence of nitrogen fertilizers, and intensive crop management, these varieties increase their
yield two to three times.

[edit] Future potential

As the UN Millennium Development project seeks to spread global economic development to Africa, the
"Green Revolution" is cited as the model for economic development. With the intent of replicating the
successful Asian boom in agronomic productivity, groups like the Earth Institute are doing research on
African agricultural systems, hoping to increase productivity. An important way this can happen is the
production of "New Rices for Africa" (NERICA). These rices, selected to tolerate the low input and harsh
growing conditions of African agriculture are produced by the African Rice Center, and billed as
technology "from Africa, for Africa". The NERICA have appeared in The New York Times (October 10,
2007) and International Herald Tribune (October 9, 2007), trumpeted as miracle crops that will
dramatically increase rice yield in Africa and enable an economic resurgence. Ongoing research in China
to develop perennial rice could result in enhanced sustainability and food security.

[edit] Golden rice

Main article: Golden rice

Rice kernels do not contain vitamin A, so people who obtain most of their calories from rice are at risk of
vitamin A deficiency. German and Swiss researchers have genetically engineered rice to produce beta-
carotene, the precursor to vitamin A, in the rice kernel. The beta-carotene turns the processed (white) rice
a "gold" color, hence the name "golden rice". The beta-carotene is converted to vitamin A in humans who
consume the rice.[86] Although some rice strains produce beta-carotene in the hull, no non-genetically
engineered strains have been found that produce beta-carotene in the kernel, despite the testing of
thousands of strains. Additional efforts are being made to improve the quantity and quality of other
nutrients in golden rice.[87]

[edit] Expression of human proteins

Ventria Bioscience has genetically modified rice to express lactoferrin, lysozyme, and human serum
albumin which are proteins usually found in breast milk. These proteins have antiviral, antibacterial, and
antifungal effects.[88]

Rice containing these added proteins can be used as a component in oral rehydration solutions which are
used to treat diarrheal diseases, thereby shortening their duration and reducing recurrence. Such
supplements may also help reverse anemia.[89]

[edit] Sayings

 In the state of Andhra Pradesh, India, there is a saying in Telugu, "Annam Parabrahma
Swaroopam" which means 'Rice (Food) is a form of God'.
 The expression for eating a meal in Burmese, "Htamin Sar", literally means "to eat rice". The Thai "gin kow" is similar. Vietnamese use the phrase "ăn cơm" in the same way, as do Bengali people with the phrase "bhat khaowa".
 Bengali people identify themselves as 'Bengali by rice and fish', alluding to their collective food
habit.
 A proverbial saying in Japan states: "The farmer spends eighty-eight efforts on rice from planting
to crop." This teaches the sense of mottainai and gratitude for the farmer and for rice itself.[90]
 There is a Sri Lankan saying, 'deyyange haal kawila', meaning 'having eaten God's rice'. This is
used to explain a crazy person or his actions in general with humour. The reasoning behind this is
that when the rice harvest is collected, a small fraction of the best part is dedicated to the gods
and that is sacred - if a person eats that, they will be afflicted with curses and lose mental
stability/act crazy.
 The Korean term for meals is "bap 밥" which means rice. It is equivalent to the Japanese word
"meshi めし" which also means rice. Both of which are also equivalent to the Chinese word "fan
飯" which also means rice.
 Hmong culture has a saying, "annokao bin biao", literally "grains of rice", which is a metaphor
for great effort or exertion.
 In the Philippines and Malaysia there is an expression "One grain of rice equals one bead of
sweat". This expression is meant to encourage appreciation of the high level of labour involved in
the production of the rice and of food in general and to discourage wasting it.
 In Costa Rica there is a popular expression "arroz con mango", which literally means "rice with mango", used to denote an absurd or nonsensical situation. This is because in Costa Rican cuisine rice is a main dish, while mango, a sweet fruit often used as a dessert, is never supposed to mix or even mingle with it in any kind of dish.
 In Puerto Rico there is a popular expression "Estás como el arroz blanco", which means "you are like white rice". White rice is the most popular way of cooking rice in Puerto Rico, so the expression means that you are everywhere: popular and highly sought after.

Finger millet


Finger millet

Finger millet grains of mixed color.

Scientific classification

Kingdom: Plantae

(unranked): Angiosperms

(unranked): Monocots

(unranked): Commelinids

Order: Poales

Family: Poaceae

Subfamily: Chloridoideae

Genus: Eleusine

Species: E. coracana

Binomial name

Eleusine coracana
Gaertn.

Finger millet (Eleusine coracana; Amharic "Dagusa" or "tōkūsō"), also known as African millet or ragi (Kannada), is an annual plant widely grown as a cereal in the arid areas of Africa and Asia. Finger millet is originally native to the Ethiopian Highlands[1] and was introduced into India approximately 4000 years ago.[citation needed] It is very adaptable to higher elevations and is grown in the Himalaya up to 2,300 metres in elevation.

[edit] Cultivation

Finger millet is often intercropped with legumes such as peanuts (Arachis hypogaea), cowpeas (Vigna
sinensis), and pigeon peas (Cajanus cajan), or other plants such as Niger seeds (Guizotia abyssinica).

Although statistics on individual millet species are confused, and are sometimes combined with sorghum,
it is estimated that finger millet is grown on approximately 38,000 km2.

Finger millet

[edit] Storage

Once harvested, the seeds keep extremely well and are seldom attacked by insects or moulds. The long
storage capacity makes finger millet an important crop in risk-avoidance strategies for poorer farming
communities.

[edit] Nutrition

Finger millet is especially valuable as it contains the amino acid methionine, which is lacking in the diets
of hundreds of millions of the poor who live on starchy staples such as cassava, plantain, polished rice, or
maize meal. Finger millet can be ground and cooked into cakes, puddings or porridge. The grain is made
into a fermented drink (or beer) in Nepal and in many parts of Africa. The straw from finger millet is used
as animal fodder. It is also used for as a flavoured drink in festivals

Nutritive value of ragi per 100 g:

Protein        7.3 g
Fat            1.3 g
Carbohydrate   72 g
Minerals       2.7 g
Calcium        344 mg
Fibre          3.6 g
Energy         328 kcal

[edit] Preparation as food

In Karnataka, ragi flour is boiled in water and the resultant preparation, called ragi mudde, is eaten with sambar.

In India, finger millet (locally called ragi) is mostly grown and consumed in Rajasthan[2], Karnataka,
Andhra Pradesh, Tamil Nadu, Maharashtra and Goa[3]. Ragi flour is made into flatbreads, including thick,
leavened dosa and thinner, unleavened roti. Ragi grain is malted and the grains are ground. This ground
flour is consumed mixed with milk, boiled water or yoghurt.

In Andhra Pradesh, ragi sankati (Telugu), ragi balls, are eaten in the morning with chilli, onions, sambar (a lentil-based stew) or meat curry, and help sustain people throughout the whole day.

In Karnataka, ragi flour is generally consumed in the form of ragi balls (ragi mudde in Kannada). The mudde is prepared by cooking the ragi flour with water to achieve a dough-like consistency, which is then rolled into balls of the desired size and consumed. Ghee with huli, saaru, sambar or a chicken curry is generally served along with these balls.

Finger millet in its commonly consumed form as a porridge

In Maharashtra, bhakri (Marathi; the same name is used in Northern Karnataka), a type of flatbread, is prepared using finger millet (ragi) flour. In Karnataka this bread is called ragi rotti (Kannada). In Goa, ragi is very popular, and satva, pole (dosa), bhakri and ambil (a sour porridge) are very common preparations.

In Nepal, a thick dough made of millet flour (ḍhĩḍo) is cooked and eaten with the hand. Fermented millet is used to make a beer (jããḍ) and the mash is distilled to make a liquor (rakśi).

In the northwest of Vietnam, finger millet is used as a medicine for women when they give birth. Some minority groups use finger millet flour to make alcohol (bacha alcohol is a well-known drink of the H'mong minority).

In southern parts of India, pediatricians recommend finger-millet-based food for infants of six months and above because of its high nutritional content, especially iron and calcium. Homemade ragi malt remains one of the most popular infant foods to this day. In Tamil Nadu, ragi is considered to be the holy food of Amman, otherwise known as the goddess Kali. Every festival of this goddess, small or large, is celebrated with women making porridge in the temples and distributing it to the poor and needy.

In India, there are hundreds of ragi recipes; even common foodstuffs such as dosa, idli and laddu are made out of ragi.

In Sri Lanka, finger millet is called kurakkan and is made into:

Kurakkan roti: an earthy brown thick roti with coconut.

Thallapa: a thick dough made of ragi by boiling it with water and some salt until it forms a dough ball; it is then eaten with a very spicy meat curry and is usually swallowed in small balls rather than chewed.

Puttu: a traditional breakfast of Kerala, usually made with rice powder together with coconut gratings and steamed in a cylindrical steamer. The preparation is also made with ragi powder, which is considered more nutritious.

[edit] Uses

A traditional food plant in Africa, millet has the potential to improve nutrition, boost food security, foster
rural development and support sustainable landcare.[4]

[edit] Common names for finger millet

 Arabic: Tailabon
 Chinese: 穇子 (Traditional), 䅟子 (Simplified), cǎnzi (pinyin)
 Danish: Fingerhirse
 Dhivehi: Binbi
 English: Finger millet, African millet, ragi, koracan
 Ethiopia: Dagussa (Amharic/Sodo), tokuso (Amharic), barankiya (Oromo)
 French: éleusine cultivée, coracan, koracan
 German: Fingerhirse
 India:
o Ragi (Kannada)
o Ragi (Telugu)
o Ragi (Hindi)
o Kodra (Himachali, Himachal Pradesh)
o Mandia (Oriya)
o Taidalu (in the Telangana region)
o Kezhvaragu, kay.pai, Aariyam (Tamil)
o Muthary, panjipul or kooravu (Malayalam)
o Mandua (in some parts of north India)
o Nachani / Ragee (Marathi & Gujarati)
o Nachani / Ragee (Rajasthani)
o Madua (Bihar, especially in the Mithila region)
o Nasne/Nachne/Nathno (Konkani)
 Japan: 四国稗 シコクビエ Shikokubie
 Kenya: Wimbi (Swahili), Kal (Dholuo), Ugimbi (Kikuyu and Meru)
 Korea: 수수 (Susu)
 Nepal: Kodo
 Nigeria: Tamba (Hausa)
 Rwanda: Uburo
 Sri Lanka: Kurakkan
 Sudan: Tailabon (Arabic), ceyut (Bari)
 Tanzania: Mbege, mwimbi, wimbi, ulezi (Swahili)
 Uganda: Bulo
 Vietnam: Hong mi, Chi ke
 Zambia: Kambale, lupoko, mawele, majolothi, amale, bule
 Zimbabwe: Rapoko, zviyo, njera, rukweza, mazhovole, uphoko, poho

Cotton


For other uses, see Cotton (disambiguation).

Cotton bolls ready for harvest

Picking cotton in Oklahoma, USA, in the 1890s

Cotton fibers viewed under a scanning electron microscope

Cotton is a soft, fluffy staple fiber that grows in a boll, or protective capsule, around the seeds of cotton
plants of the genus Gossypium. The plant is a shrub native to tropical and subtropical regions around the
world, including the Americas, Africa, India, and Pakistan. The fiber most often is spun into yarn or
thread and used to make a soft, breathable textile, which is the most widely used natural-fiber cloth in
clothing today. The English name derives from the Arabic (al) quṭn (قُطْن), which began to be used circa 1400.[1] The botanical purpose of cotton fiber is to aid in seed dispersal.

[edit] History

Cotton plants as imagined and drawn by John Mandeville in the 14th century

According to the Foods and Nutrition Encyclopedia, the earliest cultivation of cotton discovered thus far
in the Americas occurred in Mexico, some 8,000 years ago.[citation needed] The indigenous species was
Gossypium hirsutum, which is today the most widely planted species of cotton in the world, constituting
about 89.9% of all production worldwide. The greatest diversity of wild cotton species is found in
Mexico, followed by Australia and Africa.[2]

Cotton was first cultivated in the Old World 7,000 years ago (5th–4th millennia BC), by the inhabitants of
the Indus Valley Civilization, which covered a huge swath of the northwestern part of the Indian
subcontinent, comprising today parts of eastern Pakistan and northwestern India.[3] The Indus cotton
industry was well developed and some methods used in cotton spinning and fabrication continued to be
used until the modern industrialization of India.[4] Well before the Common Era, the use of cotton textiles
had spread from India to the Mediterranean and beyond.[5]

The Greeks and the Arabs were apparently unfamiliar with cotton until the wars of Alexander the Great; his contemporary Megasthenes told Seleucus I Nicator of "there being trees on which wool grows" in "Indica".

According to The Columbia Encyclopedia, Sixth Edition:[6]

Cotton has been spun, woven, and dyed since prehistoric times. It clothed the people of ancient India,
Egypt, and China. Hundreds of years before the Christian era, cotton textiles were woven in India with
matchless skill, and their use spread to the Mediterranean countries. In the first century, Arab traders brought fine muslin and calico to Italy and Spain. The Moors introduced the cultivation of cotton into
Spain in the 9th century. Fustians and dimities were woven there and in the 14th century in Venice and
Milan, at first with a linen warp. Little cotton cloth was imported to England before the 15th century,
although small amounts were obtained chiefly for candlewicks. By the 17th century, the East India
Company was bringing rare fabrics from India. Native Americans skillfully spun and wove cotton into
fine garments and dyed tapestries. Cotton fabrics found in Peruvian tombs are said to belong to a pre-Inca
culture.

In Iran (Persia), the history of cotton dates back to the Achaemenid era (5th century BC); however, there
are few sources about the planting of cotton in pre-Islamic Iran. The planting of cotton was common in
Merv, Ray and Pars of Iran. In the poems of Persian poets, especially Ferdowsi's Shahname, there are
many references to cotton ("panbe" in Persian). Marco Polo (13th century) refers to the major products of
Persia, including cotton. John Chardin, a famous French traveler of the 17th century who visited Safavid Persia, wrote approvingly of the vast cotton farms of Persia.[7]

In Peru, cultivation of the indigenous cotton species Gossypium barbadense was the backbone of the
development of coastal cultures, such as the Norte Chico, Moche and Nazca. Cotton was grown upriver,
made into nets and traded with fishing villages along the coast for large supplies of fish. The Spanish who
came to Mexico and Peru in the early 16th century found the people growing cotton and wearing clothing
made of it.

During the late medieval period, cotton became known as an imported fiber in northern Europe, without
any knowledge of how it was derived, other than that it was a plant; noting its similarities to wool, people
in the region could only imagine that cotton must be produced by plant-borne sheep. John Mandeville,
writing in 1350, stated as fact the now-preposterous belief: "There grew there [India] a wonderful tree
which bore tiny lambs on the endes of its branches. These branches were so pliable that they bent down to
allow the lambs to feed when they are hungrie [sic]." (See Vegetable Lamb of Tartary.) This aspect is
retained in the name for cotton in many European languages, such as German Baumwolle, which
translates as "tree wool" (Baum means "tree"; Wolle means "wool"). By the end of the 16th century,
cotton was cultivated throughout the warmer regions in Asia and the Americas.

India's cotton-processing sector gradually declined during British expansion in India and the
establishment of colonial rule during the late 18th and early 19th centuries. This was largely due to
aggressive colonialist mercantile policies of the British East India Company, which made cotton
processing and manufacturing workshops in India uncompetitive. Indian markets were increasingly forced
to supply only raw cotton and were forced, by British-imposed law, to purchase manufactured textiles
from Britain.

[edit] Industrial revolution in Britain

The advent of the Industrial Revolution in Britain provided a great boost to cotton manufacture, as textiles
emerged as Britain's leading export. In 1738, Lewis Paul and John Wyatt, of Birmingham, England,
patented the roller spinning machine, and the flyer-and-bobbin system for drawing cotton to a more even
thickness using two sets of rollers that traveled at different speeds. Later, the invention of the spinning
jenny in 1764 and Richard Arkwright's spinning frame (based on the roller spinning machine) in 1769
enabled British weavers to produce cotton yarn and cloth at much higher rates. From the late 18th century
onwards, the British city of Manchester acquired the nickname "Cottonopolis" due to the cotton industry's
omnipresence within the city, and Manchester's role as the heart of the global cotton trade. Production
capacity in Britain and the United States was further improved by the invention of the cotton gin by the American Eli Whitney in 1793. Improving technology and increasing control of world markets allowed
British traders to develop a commercial chain in which raw cotton fibers were (at first) purchased from
colonial plantations, processed into cotton cloth in the mills of Lancashire, and then exported on British
ships to captive colonial markets in West Africa, India, and China (via Shanghai and Hong Kong).

By the 1840s, India was no longer capable of supplying the vast quantities of cotton fibers needed by
mechanized British factories, while shipping bulky, low-price cotton from India to Britain was time-
consuming and expensive. This, coupled with the emergence of American cotton as a superior type (due
to the longer, stronger fibers of the two domesticated native American species, Gossypium hirsutum and
Gossypium barbadense), encouraged British traders to purchase cotton from plantations in the United
States and the Caribbean. By the mid 19th century, "King Cotton" had become the backbone of the
southern American economy. In the United States, cultivating and harvesting cotton became the leading
occupation of slaves.

During the American Civil War, American cotton exports slumped due to a Union blockade on Southern ports and a strategic decision by the Confederate government to cut exports, hoping to force Britain to recognize the Confederacy or enter the war. This prompted the main purchasers of cotton, Britain and France, to turn to Egyptian cotton. British and French traders invested heavily in cotton plantations
and the Egyptian government of Viceroy Isma'il took out substantial loans from European bankers and
stock exchanges. After the American Civil War ended in 1865, British and French traders abandoned
Egyptian cotton and returned to cheap American exports, sending Egypt into a deficit spiral that led to the
country declaring bankruptcy in 1876, a key factor behind Egypt's annexation by the British Empire in
1882.

Prisoners farming cotton under the trusty system in Parchman Farm, Mississippi - 1911

Picking cotton in Georgia, United States, in 1943

Cotton exhibit at the Louisiana State Exhibit Museum in Shreveport; Louisiana has been a major cotton producer.

During this time, cotton cultivation in the British Empire, especially India, greatly increased to replace the
lost production of the American South. Through tariffs and other restrictions, the British government
discouraged the production of cotton cloth in India; rather, the raw fiber was sent to England for
processing. The Indian patriot Mahatma Gandhi described the process:

1. English people buy Indian cotton in the field, picked by Indian labor at seven cents a day, through
an optional monopoly.
2. This cotton is shipped on British ships, a three-week journey across the Indian Ocean, down the
Red Sea, across the Mediterranean, through Gibraltar, across the Bay of Biscay and the Atlantic
Ocean to London. One hundred per cent profit on this freight is regarded as small.
3. The cotton is turned into cloth in Lancashire. You pay shilling wages instead of Indian pennies to
your workers. The English worker not only has the advantage of better wages, but the steel
companies of England get the profit of building the factories and machines. Wages; profits; all
these are spent in England.

4. The finished product is sent back to India at European shipping rates, once again on British ships.
The captains, officers, sailors of these ships, whose wages must be paid, are English. The only
Indians who profit are a few lascars who do the dirty work on the boats for a few cents a day.
5. The cloth is finally sold back to the kings and landlords of India who got the money to buy this
expensive cloth out of the poor peasants of India who worked at seven cents a day. (Fisher 1932
pp 154–156)

In the United States, Southern cotton provided capital for the continuing development of the North. The
cotton produced by enslaved African Americans not only helped the South, but also enriched Northern
merchants. Much of the Southern cotton was transshipped through the northern ports.

Cotton remained a key crop in the Southern economy after emancipation and the end of the Civil War in
1865. Across the South, sharecropping evolved, in which free black farmers and landless white farmers
worked on white-owned cotton plantations of the wealthy in return for a share of the profits. Cotton
plantations required vast labor forces to hand-pick cotton, and it was not until the 1950s that reliable
harvesting machinery was introduced into the South (prior to this, cotton-harvesting machinery had been
too clumsy to pick cotton without shredding the fibers). During the early 20th century, employment in the
cotton industry fell, as machines began to replace laborers, and the South's rural labor force dwindled
during the First and Second World Wars. Today, cotton remains a major export of the southern United
States, and a majority of the world's annual cotton crop is of the long-staple American variety.[8]

[edit] Tangüis cotton

Main article: Fermín Tangüis

Fermín Tangüis poses with an example of the "Tangüis cotton"

In 1901, Peru's cotton industry suffered from an epidemic of a plant disease known as "cotton wilt" or, more correctly, "fusarium wilt", caused by the fungus Fusarium vasinfectum.[9] The plant
disease, which spread throughout Peru, entered the plant's roots and worked its way up the stem until the
plant was completely dried up. Fermín Tangüis, a Puerto Rican agriculturist who lived in Peru, studied
some species of the plant that were affected by the disease to a lesser extent and experimented in
germination with the seeds of various cotton plants. In 1911, after 10 years of experimenting and failures,
Tangüis was able to develop a seed which produced a superior cotton plant resistant to the disease. The seeds produced a plant whose fiber was about 40% longer (between 29 mm and 33 mm) and thicker, did not break easily, and required little water.[10] The Tangüis cotton, as it became known, is the variety which is
preferred by the Peruvian national textile industry. It constituted 75% of all the Peruvian cotton
production, both for domestic use and apparel exports. The Tangüis cotton crop was estimated at 225,000
bales that year.[11]
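The quoted improvement implies a baseline fiber length, which the short Python sketch below recovers; this is an inference from the article's own numbers, not a figure stated in it.

# If 29-33 mm is 40% longer than the ordinary fiber, the ordinary fiber
# must have been roughly 21-24 mm.
improved_mm = (29, 33)
baseline_mm = tuple(round(mm / 1.4, 1) for mm in improved_mm)
print(baseline_mm)  # (20.7, 23.6)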

[edit] Cultivation

Cotton plowing in Togo, 1928

Harvested cotton in Tennessee (2006)

Cotton modules in Australia (2007)

Successful cultivation of cotton requires a long frost-free period, plenty of sunshine, and a moderate
rainfall, usually from 600 to 1200 mm (24 to 48 inches). Soils usually need to be fairly heavy, although the level of nutrients does not need to be exceptional. In general, these conditions are met within the
seasonally dry tropics and subtropics in the Northern and Southern hemispheres, but a large proportion of
the cotton grown today is cultivated in areas with less rainfall that obtain the water from irrigation.
Production of the crop for a given year usually starts soon after harvesting the preceding autumn. Planting
time in spring in the Northern hemisphere varies from the beginning of February to the beginning of June.
The area of the United States known as the South Plains is the largest contiguous cotton-growing region
in the world. While dryland (non-irrigated) cotton is successfully grown in this region, consistent yields
are only produced with heavy reliance on irrigation water drawn from the Ogallala Aquifer. Since cotton
is somewhat salt and drought tolerant, this makes it an attractive crop for arid and semiarid regions. As
water resources get tighter around the world, economies that rely on it face difficulties and conflict, as
well as potential environmental problems.[12][13][14][15][16] For example, improper cropping and irrigation
practices have led to desertification in areas of Uzbekistan, where cotton is a major export. In the days of
the Soviet Union, the Aral Sea was tapped for agricultural irrigation, largely of cotton, and now salination
is widespread.[15][16]

[edit] Genetic modification

Genetically modified (GM) cotton was developed to reduce the heavy reliance on pesticides. The
bacterium Bacillus thuringiensis (Bt) naturally produces a chemical harmful only to a small fraction of
insects, most notably the larvae of moths and butterflies, beetles, and flies, and harmless to other forms of
life. The gene coding for Bt toxin has been inserted into cotton, causing cotton to produce this natural
insecticide in its tissues. In many regions, the main pests in commercial cotton are lepidopteran larvae,
which are killed by the Bt protein in the transgenic cotton they eat. This eliminates the need to use large
amounts of broad-spectrum insecticides to kill lepidopteran pests (some of which have developed
pyrethroid resistance). This spares natural insect predators in the farm ecology and further contributes to
noninsecticide pest management.

Bt cotton is ineffective against many cotton pests, however, such as plant bugs, stink bugs, and aphids;
depending on circumstances it may still be desirable to use insecticides against these. A 2006 study done
by Cornell researchers, the Center for Chinese Agricultural Policy and the Chinese Academy of Science
on Bt cotton farming in China found that after seven years these secondary pests that were normally
controlled by pesticide had increased, necessitating the use of pesticides at similar levels to non-Bt cotton
and causing less profit for farmers because of the extra expense of GM seeds.[17] However, a more recent
2009 study by the Chinese Academy of Sciences, Stanford University and Rutgers University refutes
this.[18] They concluded that the GM cotton effectively controlled bollworm. The secondary pests were
mostly mirids (plant bugs), whose increase was related to local temperature and rainfall and which only
continued to increase in half the villages studied. Moreover, the increase in insecticide use for the control
of these secondary insects was far smaller than the reduction in total insecticide use due to Bt cotton
adoption. The International Service for the Acquisition of Agri-biotech Applications (ISAAA) said that,
worldwide, GM cotton was planted on an area of 16 million hectares in 2009.[19] This was 49% of the
worldwide total area planted in cotton. The U.S. cotton crop was 93% GM in 2010[20] and the Chinese
cotton crop was 68% GM in 2009.[21]

The initial introduction of GM cotton proved to be a huge success in Australia: the yields were
equivalent to those of non-transgenic varieties, and the crop used much less pesticide to produce (an 85%
reduction).[22] The subsequent introduction of a second variety of GM cotton led to increases in GM
cotton production until 95% of the Australian cotton crop was GM in 2009.[19]

Cotton has also been genetically modified for resistance to glyphosate (marketed as Roundup in North
America), an inexpensive and highly effective, but broad-spectrum herbicide. Originally, it was only
possible to achieve glyphosate resistance when the plant was young, but with the development of
Roundup Ready Flex, it is possible to achieve glyphosate resistance much later in the growing season.

GM cotton acreage in India continues to grow at a rapid rate, increasing from 50,000 hectares in 2002 to
8.4 million hectares in 2009. The total cotton area in India was 9.6 million hectares (the largest in the
world, or about 35% of the world cotton area), so GM cotton was grown on 87% of the cotton area in
2009.[21] This makes India the country with the largest area of GM cotton in the world, surpassing China
(3.7 million hectares in 2009). The major reason for this increase is a combination of increased farm
income ($225/ha) and a reduction in pesticide use to control the cotton bollworm.
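The quoted share is simple arithmetic on the areas given above; a quick check in Python (figures taken directly from this paragraph):

# Quick arithmetic check of the GM cotton figures quoted above.
gm_area_mha = 8.4      # GM cotton area in India, 2009 (million hectares)
total_area_mha = 9.6   # total cotton area in India, 2009 (million hectares)

gm_share = gm_area_mha / total_area_mha
print(f"GM share of Indian cotton area: {gm_share:.1%}")  # 87.5% (rounded to 87% in the text)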

Cotton has gossypol, a toxin that makes it inedible. However, scientists have silenced the gene that
produces the toxin, making it a potential food crop.[23]

[edit] Organic production

Organic cotton is generally understood as cotton, from plants not genetically modified, that is certified to
be grown without the use of any synthetic agricultural chemicals, such as fertilizers or pesticides.[24] Its
production also promotes and enhances biodiversity and biological cycles.[25] United States cotton
plantations are required to comply with the National Organic Program (NOP), which determines the
allowed practices for pest control, growing, fertilizing, and handling of organic crops.[26][27] As of 2007,
265,517 bales of organic cotton were produced in 24 countries, and worldwide production was growing at
a rate of more than 50% per year.[28]

[edit] Pests and weeds

Main article: List of cotton diseases

Hoeing a cotton field to remove weeds, Greene County, Georgia, USA, 1941

The cotton industry relies heavily on chemicals, such as fertilizers and insecticides, although a very small
number of farmers are moving toward an organic model of production, and organic cotton products are
now available for purchase at limited locations. These are popular for baby clothes and diapers. Under
most definitions, organic products do not use genetic engineering.

Historically, in North America, one of the most economically destructive pests in cotton production has
been the boll weevil. Due to the US Department of Agriculture's highly successful Boll Weevil
Eradication Program (BWEP), this pest has been eliminated from cotton in most of the United States.
This program, along with the introduction of genetically engineered Bt cotton (which contains a bacterial
gene that codes for a plant-produced protein that is toxic to a number of pests such as cotton bollworm
and pink bollworm), has allowed a reduction in the use of synthetic insecticides.

Other significant global pests of cotton include the pink bollworm, Pectinophora gossypiella; the chili
thrips, Scirtothrips dorsalis; and the cotton seed bug, Oxycarenus hyalinipennis.

[edit] Harvesting

Offloading freshly harvested cotton into a module builder in Texas; previously built modules can be seen
in the background

Cotton being picked by hand in India, 2005.

Most cotton in the United States, Europe, and Australia is harvested mechanically, either by a cotton
picker, a machine that removes the cotton from the boll without damaging the cotton plant, or by a cotton
stripper, which strips the entire boll off the plant. Cotton strippers are used in regions where it is too
windy to grow picker varieties of cotton, and usually after application of a chemical defoliant or the
natural defoliation that occurs after a freeze. Cotton is a perennial crop in the tropics, and without
defoliation or freezing, the plant will continue to grow.

Cotton continues to be picked by hand in developing countries.[29]

[edit] Competition from synthetic fibers

The era of manufactured fibers began with the development of rayon in France in the 1890s. Rayon is
derived from natural cellulose and so cannot be considered synthetic, but it requires extensive
processing during manufacture, and it served as a less expensive replacement for more naturally derived materials. A
succession of new synthetic fibers was introduced by the chemicals industry in the following decades.
Acetate in fiber form was developed in 1924. Nylon, the first fiber synthesized entirely from
petrochemicals, was introduced as a sewing thread by DuPont in 1936, followed by DuPont's acrylic in
1944. Some garments were created from fabrics based on these fibers, such as women's hosiery from
nylon, but it was not until the introduction of polyester into the fiber marketplace in the early 1950s that
the market for cotton came under threat.[30] The rapid uptake of polyester garments in the 1960s caused
economic hardship in cotton-exporting economies, especially in Central American countries, such as
Nicaragua, where cotton production had boomed tenfold between 1950 and 1965 with the advent of cheap
chemical pesticides. Cotton production recovered in the 1970s, but crashed to pre-1960 levels in the early
1990s.[31]

Beginning as a self-help program in the mid-1960s, the Cotton Research and Promotion Program (CRPP)
was organized by U.S. cotton producers in response to cotton's steady decline in market share. At that
time, producers voted to set up a per-bale assessment system to fund the program, with built-in safeguards
to protect their investments. With the passage of the Cotton Research and Promotion Act of 1966, the
program joined forces and began battling synthetic competitors and re-establishing markets for cotton.
Today, the success of this program has made cotton the best-selling fiber in the U.S. and one of the best-
selling fibers in the world.

Administered by the Cotton Board and conducted by Cotton Incorporated, the CRPP works to greatly
increase the demand for and profitability of cotton through various research and promotion activities. It is
funded by U.S. cotton producers and importers.

[edit] Uses

Cotton is used to make a number of textile products. These include terrycloth for highly absorbent bath
towels and robes; denim for blue jeans; chambray, popularly used in the manufacture of blue work shirts
(from which we get the term "blue-collar"); and corduroy, seersucker, and cotton twill. Socks, underwear,
and most T-shirts are made from cotton. Bed sheets often are made from cotton. Cotton also is used to
make yarn used in crochet and knitting. Fabric also can be made from recycled or recovered cotton that
otherwise would be thrown away during the spinning, weaving, or cutting process. While many fabrics
are made completely of cotton, some materials blend cotton with other fibers, including rayon and
synthetic fibers such as polyester. It can be used in either knitted or woven fabrics, and it can be blended
with elastane to make a stretchier thread for knitted fabrics and apparel such as stretch jeans.

In addition to the textile industry, cotton is used in fishnets, coffee filters, tents, gunpowder (see
nitrocellulose), cotton paper, and in bookbinding. The first Chinese paper was made of cotton fiber. Fire
hoses were once made of cotton.

The cottonseed which remains after the cotton is ginned is used to produce cottonseed oil, which, after
refining, can be consumed by humans like any other vegetable oil. The cottonseed meal that is left
generally is fed to ruminant livestock; the gossypol remaining in the meal is toxic to monogastric animals.
Cottonseed hulls can be added to dairy cattle rations for roughage. During the American slavery period,
cotton root bark was used in folk remedies as an abortifacient, that is, to induce a miscarriage.[32]

Cotton linters are fine, silky fibers which adhere to the seeds of the cotton plant after ginning. These curly
fibers typically are less than 1/8 in (3 mm) long. The term also may apply to the longer textile fiber staple
lint as well as the shorter fuzzy fibers from some upland species. Linters are traditionally used in the
manufacture of paper and as a raw material in the manufacture of cellulose. In the UK, linters are referred
to as "cotton wool". This can also be a refined product (absorbent cotton in U.S. usage) which has

230
medical, cosmetic and many other practical uses. The first medical use of cotton wool was by Dr. Joseph
Sampson Gamgee at the Queen's Hospital (later the General Hospital) in Birmingham, England.

Shiny cotton is a processed version of the fiber that can be made into cloth resembling satin for shirts and
suits. However, it is hydrophobic (does not absorb water easily), which makes it unfit for use in bath and
dish towels (although examples of these made from shiny cotton are seen).

The term Egyptian cotton refers to the extra long staple cotton grown in Egypt and favored for the luxury
and upmarket brands worldwide. During the U.S. Civil War, with heavy European investments, Egyptian-
grown cotton became a major alternate source for British textile mills. Egyptian cotton is more durable
and softer than American Pima cotton, which is why it is more expensive. Pima cotton is American
cotton that is grown in the southwestern states of the U.S.

[edit] International trade

Worldwide cotton production

Cottonseed output in 2005

The largest producers of cotton, currently (2009), are China and India, with annual production of about 34
million bales and 24 million bales, respectively; most of this production is consumed by their respective
textile industries. The largest exporters of raw cotton are the United States, with sales of $4.9 billion, and
Africa, with sales of $2.1 billion. The total international trade is estimated to be $12 billion. Africa's share
of the cotton trade has doubled since 1980. Neither area has a significant domestic textile industry, textile
manufacturing having moved to developing nations in Eastern and South Asia such as India and China. In
Africa, cotton is grown by numerous small holders. Dunavant Enterprises, based in Memphis, Tennessee,
is the leading cotton broker in Africa, with hundreds of purchasing agents. It operates cotton gins in
Uganda, Mozambique, and Zambia. In Zambia, it often offers loans for seed and expenses to the 180,000
small farmers who grow cotton for it, as well as advice on farming methods. Cargill also purchases cotton
in Africa for export.

The 25,000 cotton growers in the United States are heavily subsidized at the rate of $2 billion per year.
The future of these subsidies is uncertain and has led to anticipatory expansion of cotton brokers'
operations in Africa. Dunavant expanded in Africa by buying out local operations. This is only possible in
former British colonies and Mozambique; former French colonies continue to maintain tight monopolies,
inherited from their former colonialist masters, on cotton purchases at low fixed prices.[33]

[edit] Leading producer countries

Top ten cotton producers — 2009 (480-pound bales)

People's Republic of China 32.0 million bales

India 23.5 million bales

United States 12.4 million bales

Pakistan 10.8 million bales

Brazil 5.5 million bales

Uzbekistan 4.4 million bales

Australia 1.8 million bales

Turkey 1.7 million bales

Turkmenistan 1.1 million bales

Syria 1.0 million bales

Source:[34]

The five leading exporters of cotton in 2009 are (1) the United States, (2) India, (3) Uzbekistan, (4)
Pakistan, and (5) Brazil. The largest nonproducing importers are Korea, Russia, Taiwan, Japan, and Hong
Kong.[34]

In India, the states of Maharashtra (26.63%), Gujarat (17.96%), Andhra Pradesh (13.75%) and Madhya
Pradesh are the leading cotton-producing states;[35] these states have a predominantly tropical wet
and dry climate.

In Pakistan, cotton is grown predominantly in the provinces of Punjab and Sindh. The leading city in
cotton production is the Punjabi city of Faisalabad which is also leading in textiles within Pakistan. The
Punjab has a tropical wet and dry climate throughout the year, which favors the growth of cotton.

In the United States, the state of Texas led in total production as of 2004,[36] while the state of California
had the highest yield per acre.[37]

[edit] Fair trade

Cotton is an enormously important commodity throughout the world. However, many farmers in
developing countries receive a low price for their produce, or find it difficult to compete with developed
countries.

This has led to an international dispute (see United States – Brazil cotton dispute):

On 27 September 2002, Brazil requested consultations with the US regarding prohibited and actionable
subsidies provided to US producers, users and/or exporters of upland cotton, as well as legislation,
regulations, statutory instruments and amendments thereto providing such subsidies (including export
credits), grants, and any other assistance to the US producers, users and exporters of upland cotton.[38] On
8 September 2004, the Panel Report recommended that the United States "withdraw" export credit
guarantees and payments to domestic users and exporters, and "take appropriate steps to remove the
adverse effects or withdraw" the mandatory price-contingent subsidy measures.[39]

In addition to concerns over subsidies, the cotton industries of some countries are criticized for employing
child labor and damaging workers' health by exposure to pesticides used in production. The
Environmental Justice Foundation has campaigned against the prevalent use of forced child and adult
labor in cotton production in Uzbekistan, the world's third largest cotton exporter.[40] The international
production and trade situation has led to "fair trade" cotton clothing and footwear, joining a rapidly
growing market for organic clothing, fair fashion or so-called "ethical fashion". The fair trade system was
initiated in 2005 with producers from Cameroon, Mali and Senegal.[41]

[edit] Trade

Cotton is bought and sold by investors and price speculators as a tradable commodity on two different
commodity exchanges in the United States of America.

 Cotton futures contracts are traded on the New York Mercantile Exchange (NYMEX) under the
ticker symbol TT. They are delivered every year in March, May, July, October, and December.[42]
 Cotton #2 futures contracts are traded on the New York Board of Trade (NYBOT) under the
ticker symbol CT. They are delivered every year in March, May, July, October, and December.[43]

[edit] Critical temperatures

 Favorable travel temperature range: below 25°C (77°F)
 Optimum travel temperature: 21°C (70°F)
 Glow temperature: 205°C (401°F)
 Fire point: 210°C (410°F)
 Autoignition temperature: 407°C (765°F)
 Autoignition temperature (for oily cotton): 120°C (248°F)

Cotton dries out, becomes hard and brittle and loses all elasticity at temperatures above 25°C (77°F).
Extended exposure to light causes similar problems.

A temperature range of 25°C (77°F) to 35°C (95°F) is the optimal range for mold development. At
temperatures below 0°C (32°F), rotting of wet cotton stops. Damaged cotton is sometimes stored at these
temperatures to prevent further deterioration.[44]

[edit] British standard yarn measures

 1 thread = 54 inches (about 137 cm)
 1 skein or rap = 80 threads (120 yards or about 109 m)
 1 hank = 7 skeins (840 yards or about 768 m)
 1 spindle = 18 hanks (15,120 yards or about 13.83 km)
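Since each unit is a fixed multiple of the one below it, conversions are simple multiplication. A minimal Python sketch of the chain, using the values above (the function name is illustrative):

# British standard cotton yarn measures (values from the list above).
INCHES_PER_THREAD = 54
THREADS_PER_SKEIN = 80
SKEINS_PER_HANK = 7
HANKS_PER_SPINDLE = 18

def threads_to_meters(threads):
    # Convert a thread count to meters (1 inch = 0.0254 m).
    return threads * INCHES_PER_THREAD * 0.0254

# 1 skein = 80 threads -> 120 yards (~109.7 m)
print(round(threads_to_meters(THREADS_PER_SKEIN), 1))
# 1 hank = 7 skeins -> 840 yards (~768.1 m)
print(round(threads_to_meters(SKEINS_PER_HANK * THREADS_PER_SKEIN), 1))
# 1 spindle = 18 hanks -> 15,120 yards (~13.83 km)
print(round(threads_to_meters(HANKS_PER_SPINDLE * SKEINS_PER_HANK * THREADS_PER_SKEIN) / 1000, 2))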

[edit] Fiber properties

Shape: fairly uniform in width (12-20 micrometers); length varies from 1 cm to 6 cm (½ to 2½ inches); typical length is 2.2 cm to 3.3 cm (⅞ to 1¼ inches)
Luster: high
Tenacity (strength): dry 3.0-5.0 g/d; wet 3.3-6.0 g/d
Resiliency: low
Density: 1.54-1.56 g/cm³
Moisture absorption: raw: conditioned 8.5%, saturation 15-25%; mercerized: conditioned 8.5-10.3%, saturation 15-27%+
Dimensional stability: good
Resistance to acids: damage, weaken fibers
Resistance to alkali: resistant; no harmful effects
Resistance to organic solvents: high resistance to most
Resistance to sunlight: prolonged exposure weakens fibers
Resistance to microorganisms: mildew and rot-producing bacteria damage fibers
Resistance to insects: silverfish damage fibers
Thermal reaction to heat: decomposes after prolonged exposure to temperatures of 150°C or over
Thermal reaction to flame: burns readily

The chemical composition of cotton is as follows:

 cellulose 91.00%
 water 7.85%
 protoplasm, pectins 0.55%
 waxes, fatty substances 0.40%
 mineral salts 0.20%

[edit] Cotton genome

A public genome sequencing effort of cotton was initiated[45] in 2007 by a consortium of public
researchers. They agreed on a strategy to sequence the genome of cultivated, tetraploid cotton.
"Tetraploid" means that cultivated cotton actually has two separate genomes within its nucleus, referred
to as the A and D genomes. The sequencing consortium first agreed to sequence the D-genome relative of
cultivated cotton (G. raimondii, a wild Central American cotton species) because of its small size and
limited number of repetitive elements. It is nearly one-third the number of bases of tetraploid cotton (AD),
and each chromosome is only present once.[clarification needed] The A genome of G. arboreum would be
sequenced next. Its genome is roughly twice the size of G. raimondii's. Part of the difference in size
between the two genomes is the amplification of retrotransposons (GORGE). Once both diploid genomes
are assembled, then research could begin sequencing the actual genomes of cultivated cotton varieties.
This strategy is born of necessity: if one were to sequence the tetraploid genome without model diploid
genomes, the euchromatic DNA sequences of the A and D genomes would co-assemble, while the repetitive
elements would assemble independently into A and D sequences. Then there
would be no way to untangle the mess of AD sequences without comparing them to their diploid
counterparts.
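To illustrate why the diploid references matter, the toy Python sketch below assigns individual reads to the A or D subgenome by counting shared k-mers with each diploid reference. This is only a conceptual illustration of the untangling problem, not the consortium's actual assembly pipeline; the sequences and k-mer length are invented for the example.

# Toy illustration of partitioning tetraploid (AD) reads between subgenomes
# by k-mer sharing with diploid reference sequences. Conceptual sketch only;
# sequences and k-mer size are hypothetical.

def kmers(seq, k=5):
    # Return the set of all k-length substrings of seq.
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def assign_read(read, ref_a, ref_d, k=5):
    # Assign a read to the subgenome whose reference shares more k-mers.
    read_k = kmers(read, k)
    shared_a = len(read_k & kmers(ref_a, k))
    shared_d = len(read_k & kmers(ref_d, k))
    if shared_a == shared_d:
        return "ambiguous"   # e.g. conserved euchromatic regions co-assemble
    return "A" if shared_a > shared_d else "D"

# Hypothetical diploid references (G. arboreum-like A, G. raimondii-like D).
REF_A = "ATGGCCTTAGCAATCGGATTACCAGT"
REF_D = "ATGGCATTAGGAATCGGCTTACCTGT"

print(assign_read("GCCTTAGCAATC", REF_A, REF_D))  # 'A'
print(assign_read("GCATTAGGAATC", REF_A, REF_D))  # 'D'
print(assign_read("ATGGC", REF_A, REF_D))         # 'ambiguous' (shared prefix)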

The public sector effort continues with the goal to create a high-quality, draft genome sequence from
reads generated by all sources. The public-sector effort has generated Sanger reads of BACs, fosmids, and
plasmids, as well as 454 reads. These latter types of reads will be instrumental in assembling an initial draft
of the D genome. In 2010, two companies (Monsanto and Illumina) completed enough Illumina
sequencing to cover the D genome of G. raimondii about 50x[clarification needed]. They announced that they
would donate their raw reads to the public. This public relations effort gave them some recognition for
sequencing the cotton genome. Once the D genome is assembled from all of this raw material, it will
undoubtedly assist in the assembly of the AD genomes of cultivated varieties of cotton, but a lot of hard
work remains.

Coffea arabica

Coffea arabica

Coffee flowers

Coffee fruits

Scientific classification

Kingdom: Plantae

(unranked): Angiosperms

(unranked): Eudicots

(unranked): Asterids

Order: Gentianales

Family: Rubiaceae

Genus: Coffea

Species: C. arabica

Binomial name

Coffea arabica
L.

Coffea arabica (pronounced /əˈræbɪkə/) is a species of Coffea originally indigenous to the mountains of
Yemen in the Arabian Peninsula, hence its name, and also from the southwestern highlands of Ethiopia
and southeastern Sudan. It is also known as the "coffee shrub of Arabia", "mountain coffee" or "arabica
coffee". Coffea arabica is believed to be the first species of coffee to be cultivated, being grown in
southwest Arabia for well over 1,000 years. It is said to produce better coffee than the other major
commercially grown coffee species, Coffea canephora (robusta), but tastes vary. C. arabica contains less
caffeine than any other commercially cultivated species of coffee. Wild plants grow to between 9 and 12
m tall, and have an open branching system; the leaves are opposite, simple elliptic-ovate to oblong, 6–
12 cm long and 4–8 cm broad, glossy dark green. The flowers are white, 10–15 mm in diameter and grow
in axillary clusters. The fruit is a drupe (though commonly called a "berry") 10–15 mm in diameter,
maturing bright red to purple and typically containing two seeds (the coffee 'beans').

[edit] Taxonomy

Coffea arabica was first described by Antoine de Jussieu, who named it Jasminum arabicum after
studying a specimen from the Botanic Gardens of Amsterdam. Linnaeus placed it in its own genus Coffea
in 1737.[1]

[edit] Distribution and habitat

Habitat, Olinda, Maui

Originally found in the southwestern highlands of Ethiopia, Coffea arabica is now rare there in its native
state, and many populations appear to be mixed native and planted trees. It is common there as an
understorey shrub. It has also been recovered from the Boma Plateau in southeastern Sudan. C. arabica is
also found on Mount Marsabit in northern Kenya, but it is unclear whether this is a truly native or
naturalised occurrence.[2] Yemen is also believed to have native C. arabica growing in fields.

[edit] Cultivation

C. arabica takes about seven years to mature fully, and does best with 1.0-1.5 meters (about 40-
59 inches) of rain, evenly distributed throughout the year.[citation needed] It is usually cultivated between
1,300 and 1,500 m altitude, but there are plantations as low as sea level and as high as 2,800 m.[citation needed]
The plant can tolerate low temperatures, but not frost, and it does best when the temperature hovers
around 20°C (68°F).[citation needed] Commercial cultivars mostly only grow to about 5 m, and are frequently
trimmed as low as 2 m to facilitate harvesting. Unlike Coffea canephora, C. arabica prefers to be grown
in light shade.[citation needed]

Drawing of Coffea arabica

Two to four years after planting, C. arabica produces small, white and highly fragrant flowers. The sweet
fragrance resembles the sweet smell of jasmine flowers. Flowers that open on sunny days produce
the greatest numbers of berries. This can be a curse, however, as coffee plants tend to produce too many
berries; this can lead to an inferior harvest and even damage yield in the following years, as the plant will
favour the ripening of berries to the detriment of its own health. On well kept plantations, this is
prevented by pruning the tree. The flowers themselves only last a few days, leaving behind only the thick
dark green leaves. The berries then begin to appear. These are as dark green as the foliage, until they
begin to ripen, at first to yellow and then light red and finally darkening to a glossy deep red. At this point
they are called 'cherries' and are ready for picking. The berries are oblong and about 1 cm long. Inferior
coffee results from picking them too early or too late, so many are picked by hand to be able to better
select them, as they do not all ripen at the same time. They are sometimes shaken off the tree onto mats,
which means that ripe and unripe berries are collected together.

The trees are difficult to cultivate and each tree can produce anywhere from 0.5–5.0 kg of dried beans,
depending on the tree's individual character and the climate that season. The real prize of this cash crop
is the beans inside. Each berry holds two locules containing the beans. The coffee beans are actually two
seeds within the fruit; fruits at the tips of the branches sometimes contain a single seed (a peaberry) or a
third seed. The seeds are covered in two membranes; the outer one is called the "parchment coat" and the
inner one is called the "silver skin."

On Java Island, trees are planted at all times of the year and are harvested year round. In parts of Brazil,
however, the trees have a season and are harvested only in winter. The plants are vulnerable to damage in
poor growing conditions (cold, low pH soil) and are also more vulnerable to pests than the C. robusta
plant.[3] Gourmet coffees are almost exclusively high-quality mild varieties of arabica coffee, such as
Colombian coffee.

Arabica coffee production in Indonesia began in 1699. Indonesian coffees, such as Sumatran and Java, are
known for heavy body and low acidity. This makes them ideal for blending with the higher acidity coffees
from Central America and East Africa.

A Coffea arabica plantation in São João do Manhuaçu, Minas Gerais, Brazil.

Unroasted coffee (Coffea arabica) beans from Brazil.

[edit] History and legend

Main article: Coffee

According to legend, human cultivation of coffee began after goats in Ethiopia were seen mounting each
other after eating the leaves and fruits of the coffee tree. However, in Ethiopia there are still some locales
where people drink a tisane made from the leaves of the coffee tree.

The first written record of coffee made from roasted coffee beans comes from Arabian scholars who
wrote that it was useful in prolonging their working hours. The Arab innovation in Yemen of making a
brew from roasted beans spread first among the Egyptians and Turks, and later found its way around
the world.

[edit] Research

Structure of coffee berry and beans:
1: Center cut
2: Bean (endosperm)
3: Silver skin (testa, epidermis)
4: Parchment coat (hull, endocarp)
5: Pectin layer
6: Pulp (mesocarp)
7: Outer skin (pericarp, exocarp)

There is an Ethiopian Coffea arabica that naturally contains very little caffeine. Maria Bernadete
Silvarolla, a researcher of Instituto Agronomico de Campinas (IAC), published findings in the journal
Nature about these strains of C. arabica plants. While beans of normal C. arabica plants contain 12
milligrams of caffeine per gram of dry mass, these newly found mutants contain only 0.76 milligrams of
caffeine per gram, but with all the taste of normal coffee.

Colombia is the second-largest grower of C. arabica.

Coffea

This article is about the biology of coffee plants. For the beverage, see Coffee.

Coffea

Coffea arabica trees in Brazil

Scientific classification

Kingdom: Plantae

(unranked): Angiosperms

(unranked): Eudicots

(unranked): Asterids

Order: Gentianales

Family: Rubiaceae

Subfamily: Ixoroideae

Tribe: Coffeeae[1]

Coffea
Genus:
L.

Type species

Coffea arabica
L.[2]

Species

Coffea ambongensis
Coffea anthonyi
Coffea arabica - Arabica Coffee
Coffea benghalensis - Bengal coffee
Coffea boinensis
Coffea bonnieri
Coffea canephora - Robusta coffee
Coffea charrieriana - Cameroonian coffee
Coffea congensis - Congo coffee
Coffea dewevrei - Excelsa coffee
Coffea excelsa - Liberian coffee
Coffea gallienii
Coffea liberica - Liberian coffee
Coffea magnistipula
Coffea mauritiana - Café marron
Coffea mogeneti
Coffea stenophylla - Sierra Leonean coffee

Coffea is a large genus (containing more than 90 species)[3] of flowering plants in the madder family,
Rubiaceae. They are shrubs or small trees, native to subtropical Africa and southern Asia. Seeds of
several species are the source of the popular beverage coffee. Coffee ranks as one of the world's most
valuable and widely traded commodity crops and is an important export of a number of countries. The
leaves and the outer part of the fruit are also sometimes eaten.[4]

[edit] Botany

Coffea berries, Bali

There are several species of Coffea that may be grown for the beans. The trees produce red or purple fruits
called "cherries" that are drupes. The cherries contain two seeds, the so-called "coffee beans", which —
despite their name — are not true beans (which are the seeds of the legume family). In about 5-10% of
any crop of coffee cherries, there is only a single bean, rather than the two usually found. This is called a
"peaberry", which is smaller and rounder than a normal coffee bean. It is often removed from the yield
and either sold separately, (as in New Guinea peaberry) or discarded.

Coffea canephora

When grown in the tropics, coffee is a vigorous bush or small tree that usually grows to a height of 3–
3.5 m (10–12 feet). Most commonly cultivated coffee species grow best at high elevations, but are
nevertheless intolerant of subfreezing temperatures.[citation needed]

A Coffea arabica tree will bear fruit after three to five years, and will produce for about 50 to 60
years (although up to 100 years is possible). The white flowers are highly scented. The fruit takes about
nine months to ripen.

[edit] Ecology

The caffeine in coffee "beans" is a natural plant defense against herbivory, i.e. a toxic substance that
protects the seeds of the plant.

Several insect pests affect coffee production, including the coffee borer beetle (Hypothenemus hampei)
and the coffee leafminer Leucoptera caffeina.

Coffee is used as a food plant by the larvae of some Lepidoptera (butterfly and moth) species, including
napoleon jacutin (Dalcera abrasa), turnip moth and some members of the genus Endoclita, including E.
damor and E. malabaricus.

[edit] New coffee species

In 2008 and 2009, researchers from the Royal Botanic Gardens, Kew named seven species of Coffea from
the mountains of northern Madagascar, including C. ambongensis, C. boinensis, C. labatii, C. pterocarpa,
C. bissetiae, and C. namorokensis.[5]

Recently, two new species of coffee plants have been discovered in Cameroon: Coffea charrieriana,
which is caffeine-free, and Coffea anthonyi.[citation needed] By crossing the new species with other known
coffees, two new features might be introduced to cultivated coffee plants: beans without caffeine and self-
pollination.[6]

Common names

(Burmese) : ka-phi
(Creole) : kafe
(English) : Abyssinian coffee, Arabian coffee, arabica coffee, Brazilian coffee, coffee tree
(Filipino) : kafe
(French) : café, caféier
(German) : Bergkaffee
(Indonesian) : kopi
(Khmer) : kafae
(Spanish) : café, cafeto
(Swahili) : kahawa
(Thai) : gafae
(Trade name) : arabica coffee
(Vietnamese) : Cà phê

Botanic description
Coffea arabica is an evergreen shrub or small tree, up to 5 m tall when unpruned, glabrous, with small
glossy leaves. Leaves are simple, opposite, thin, dark green, shiny-surfaced, fairly stiff; axillary
and sub-axillary buds often develop into reproductive lateral branches. Leaves petiolate, sometimes
bearing interpetiolar stipules. Prominent leaf midrib and lateral veins. Flowers produced in dense clusters
along reproductive branches in the axils of the leaves. White, sweet scented, star-shaped and carried on
stout but short peduncles. Bracteoles united, forming a cup-shaped epicalyx at the base of the flower.
There are 5 corolla segments, united for about half their length and spreading out very widely at anthesis, and 5 stamens
inserted in the corolla tube. Anthers carried on long, slender, upright filaments. Ovary inferior, 2 united
unilocular carpels, each containing a single ovule attached to the base of the carpel wall. The ovary bears
a slender style, which terminates in short, pointed bifid stigmas. Fruit a drupe; pericarp composed of
shiny exocarp, fleshy mesocarp and relatively thin but tough endocarp, in which the seeds are enclosed.
Immature berries dull green; on ripening the skin colour changes through yellow to bright crimson. Each
berry contains 2 seeds, 8.5-12.5 mm long, ellipsoidal in shape and pressed together by a flattened surface
that is deeply grooved; the outer surface is convex. A thin, silvery testa follows the outline of the endosperm, so
fragments are often found in ventral groove after preparation. Seeds consist mainly of green corneous
endosperm, folded in a peculiar manner, and a small embryo near the base. Dried seeds, after removal of
the silvery skin, provide the coffee beans of commerce. The generic name is derived from the Arabic
word used for the drink, which may have come from the region of Kefa in Ethiopia.

Ecology and distribution


History of cultivation
Despite its name, C. arabica originated in Ethiopia, where it is an understorey tree in forests. It is believed
to have been introduced into Arabia before the 15th century. First planted in Java in 1690, and in the early
18th century was carried to Surinam, Martinique and Jamaica. Cultivation soon spread throughout the
West Indies and Central America and favourable regions of South America. Later, it reached India and Sri
Lanka. From Amsterdam, it was taken to the Philippines in 1740 and Hawaii in 1825. A single tree from
Edinburgh Botanic Gardens was taken to Malawi in 1878, and from there it was introduced into Uganda
in 1900 under the name ‘nyasa’. Catholic missionaries took it to Tanzania and Kenya at the end of the
19th century. The French fathers imported seeds from Aden in Yemen into Kenya in 1893. Today, nearly
90% of the world's coffee comes from C. arabica.
Natural Habitat
C. arabica thrives in a moderately humid atmosphere and prefers deep friable soil on undulating land; it is
unsuited to stiff clay or sandy soils and is considered tolerant of acid soils. It thrives at 1500-2000 m or
higher, ideally with rainfall 1500-2000 mm.
Geographic distribution
Native : Ethiopia, Mozambique
Exotic : Angola, Brazil, Burkina Faso, Colombia, Costa Rica, Democratic Republic of Congo, Dominican
Republic, El Salvador, India, Indonesia, Jamaica, Kenya, Madagascar, Malawi, Martinique, Mexico,
Philippines, Rwanda, Sri Lanka, Surinam, Tanzania, Uganda, United States of America

Biophysical limits
Altitude: 1,300-3,000 m; mean annual temperature: 15-25 °C; mean annual rainfall: 1,500-2,000 mm
Soil type: Savannah soils of moderate acidity to neutral or slight alkalinity are suitable. Very sandy soils
and shallow soils are unsuitable for growing coffee. Soils should be deep, slightly acidic, well-drained
loams. They should be rich in nutrients especially potash and with a generous supply of organic matter.

Reproductive Biology
The plant is tetraploid, and over 30 mutations have been recognized. In the bisexual flowers, pollen is
shed shortly after the flower opens, and the stigma is receptive immediately. Self-pollination can occur, as
seed sets even when the flowers are bagged. Pollination is also by honeybees, which collect nectar and
pollen from the flowers. Dispersal is mainly by birds and mammals.

Heart

This article is about the organ in various animals. For the human heart, see Human heart. For other
uses, see Heart (disambiguation).

"Cardiac" redirects here. For the cardboard computer, see CARDboard Illustrative Aid to Computation.

The human heart

Normal heart sounds as heard with a stethoscope

The heart is a myogenic muscular organ found in all animals with a circulatory system (including all
vertebrates), that is responsible for pumping blood throughout the blood vessels by repeated, rhythmic
contractions. The term cardiac (as in cardiology) means "related to the heart" and comes from the Greek
καρδιά, kardia, for "heart".

The vertebrate heart is composed of cardiac muscle, which is an involuntary striated muscle tissue found
only in this organ, and connective tissue. The average human heart, beating at 72 beats per minute, will
beat approximately 2.5 billion times during an average 66-year lifespan, and weighs approximately 250 to
300 grams (9 to 11 oz) in females and 300 to 350 grams (11 to 12 oz) in males.[1]

In invertebrates that possess a circulatory system, the heart is typically a tube or small sac and pumps
fluid that contains water and nutrients such as proteins, fats, and sugars. In insects, the "heart" is often
called the dorsal tube, and insect "blood" is almost never oxygenated, since insects usually respire
(breathe) directly through their body surfaces (internal and external). However, the hearts of some
other arthropods (including spiders and crustaceans such as crabs and shrimp) and some other animals
pump hemolymph, which contains the copper-based protein hemocyanin as an oxygen transporter similar
to the iron-based hemoglobin in red blood cells found in vertebrates.

[edit] Early development

Main article: Heart development

The mammalian heart is derived from embryonic mesoderm germ-layer cells that differentiate after
gastrulation into mesothelium, endothelium, and myocardium. Mesothelial pericardium forms the outer
lining of the heart. The inner lining of the heart, lymphatic and blood vessels, develop from endothelium.
Heart muscle is termed myocardium.[2]

From splanchnopleuric mesoderm tissue, the cardiogenic plate develops cranially and laterally to the
neural plate. In the cardiogenic plate, two separate angiogenic cell clusters form on either side of the
embryo. Each cell cluster coalesces to form an endocardial tube continuous with a dorsal aorta and a
vitelloumbilical vein. As embryonic tissue continues to fold, the two endocardial tubes are pushed into the
thoracic cavity, begin to fuse together, and complete the fusing process at approximately 21 days.[3]

The human embryonic heart begins beating at around 21 days after conception, or five weeks after the last
normal menstrual period (LMP). The first day of the LMP is normally used to date the start of the
gestation (pregnancy). The human heart begins beating at a rate near the mother’s, about 75-80 beats per
minute (BPM).

The embryonic heart rate (EHR) then accelerates by approximately 100 BPM during the first month, to
peak at 165-185 BPM during the early 7th week (early 9th week after the LMP). This acceleration is
approximately 3.3 BPM per day, or about 10 BPM every three days, an increase of 100 BPM in
the first month.[4] After 9.1 weeks after the LMP, it decelerates to about 152 BPM (+/-25 BPM) during the
15th week post LMP. After the 15th week, the deceleration slows, reaching an average rate of about 145
BPM (+/-25 BPM) at term. The regression formula, which describes this acceleration before the embryo
reaches 25 mm in crown-rump length, or 9.2 LMP weeks, is: Age in days = EHR(0.3) + 6. There is no
difference in female and male heart rates before birth.[5]
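As a worked example of the regression formula above, in Python (the measured rate of 130 BPM is illustrative):

# Estimate embryonic age in days from embryonic heart rate (EHR), valid
# before ~25 mm crown-rump length (9.2 LMP weeks), per the formula above.
def embryonic_age_days(ehr_bpm):
    # Age in days = EHR * 0.3 + 6
    return ehr_bpm * 0.3 + 6

print(embryonic_age_days(130))  # 45.0 days, i.e. about 6.4 weeks post-conception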

[edit] Structure

The structure of the heart varies among the different branches of the animal kingdom. (See Circulatory
system.) Cephalopods have two "gill hearts" and one "systemic heart". In vertebrates, the heart lies in the
anterior part of the body cavity, dorsal to the gut. It is always surrounded by a pericardium, which is
usually a distinct structure, but may be continuous with the peritoneum in jawless and cartilaginous fish.
Hagfishes, uniquely among vertebrates, also possess a second heart-like structure in the tail.[6]

[edit] In humans

Main article: Human heart

Structure diagram of the human heart from an anterior view. Blue components indicate de-oxygenated
blood pathways and red components indicate oxygenated pathways.

The human heart has a mass of between 250 and 350 grams and is about the size of a fist.[7] It is located
anterior to the vertebral column and posterior to the sternum.

It is enclosed in a double-walled sac called the pericardium. The superficial part of this sac is called the
fibrous pericardium. This sac protects the heart, anchors its surrounding structures, and prevents
overfilling of the heart with blood.

The outer wall of the human heart is composed of three layers. The outer layer is called the epicardium, or
visceral pericardium since it is also the inner wall of the pericardium. The middle layer is called the
myocardium and is composed of muscle which contracts. The inner layer is called the endocardium and is
in contact with the blood that the heart pumps. Also, it merges with the inner lining (endothelium) of
blood vessels and covers heart valves.[8]

The human heart has four chambers, two superior atria and two inferior ventricles. The atria are the
receiving chambers and the ventricles are the discharging chambers. The right ventricle discharges into
the lungs to oxygenate the blood. The left ventricle discharges its blood toward the rest of the body via the
aorta.

The pathway of blood through the human heart consists of a pulmonary circuit and a systemic circuit.
Blood flows through the heart in one direction, from the atria to the ventricles, and out of the great
arteries, or the aorta for example. This is done by four valves which are the tricuspid valve, the mitral
valve, the aortic valve, and the pulmonary valve.[9]

[edit] In fish

Schematic of simplified fish heart

Primitive fish have a four-chambered heart, but the chambers are arranged sequentially so that this
primitive heart is quite unlike the four-chambered hearts of mammals and birds. The first chamber is the
sinus venosus, which collects de-oxygenated blood, from the body, through the hepatic and cardinal
veins. From here, blood flows into the atrium and then to the powerful muscular ventricle where the main
pumping action will take place. The fourth and final chamber is the conus arteriosus which contains
several valves and sends blood to the ventral aorta. The ventral aorta delivers blood to the gills where it is
oxygenated and flows, through the dorsal aorta, into the rest of the body. (In tetrapods, the ventral aorta
has divided in two; one half forms the ascending aorta, while the other forms the pulmonary artery).[6]

In the adult fish, the four chambers are not arranged in a straight row but, instead, form an S-shape with
the latter two chambers lying above the former two. This relatively simpler pattern is found in
cartilaginous fish and in the ray-finned fish. In teleosts, the conus arteriosus is very small and can more
accurately be described as part of the aorta rather than of the heart proper. The conus arteriosus is not
present in any amniotes, presumably having been absorbed into the ventricles over the course of
evolution. Similarly, while the sinus venosus is present as a vestigial structure in some reptiles and birds,
it is otherwise absorbed into the right atrium and is no longer distinguishable.[6]

[edit] In double circulatory systems

In amphibians and most reptiles, a double circulatory system is used but the heart is not completely
separated into two pumps. The development of the double system is necessitated by the presence of lungs
which deliver oxygenated blood directly to the heart.

In living amphibians, the atrium is divided into two separate chambers by the presence of a muscular
septum even though there is only a single ventricle. The sinus venosus, which remains large in
amphibians but connects only to the right atrium, receives blood from the vena cavae, with the pulmonary
vein by-passing it entirely to enter the left atrium.

In the heart of lungfish, the septum extends part-way into the ventricle. This allows for some degree of
separation between the de-oxygenated bloodstream destined for the lungs and the oxygenated stream that
is delivered to the rest of the body. The absence of such a division in living amphibian species may be at
least partly due to the amount of respiration that occurs through the skin in such species; thus, the blood
returned to the heart through the vena cavae is, in fact, already partially oxygenated. As a result, there
may be less need for a finer division between the two bloodstreams than in lungfish or other tetrapods.
Nonetheless, in at least some species of amphibian, the spongy nature of the ventricle seems to maintain
more of a separation between the bloodstreams than appears the case at first glance. Furthermore, the
conus arteriosus has lost its original valves and contains a spiral valve, instead, that divides it into two
parallel parts, thus helping to keep the two bloodstreams separate.[6]

The heart of most reptiles (except for crocodilians; see below) has a similar structure to that of lungfish
but, here, the septum is generally much larger. This divides the ventricle into two halves but, because the
septum does not reach the whole length of the heart, there is a considerable gap near the openings to the
pulmonary artery and the aorta. In practice, however, in the majority of reptilian species, there appears to
be little, if any, mixing between the bloodstreams, so the aorta receives, essentially, only oxygenated
blood.[6]

[edit] The fully divided heart

Human heart removed from a 64-year-old male

Surface anatomy of the human heart. The heart is demarcated by:


-A point 9 cm to the left of the midsternal line (apex of the heart)
-The seventh right sternocostal articulation
-The upper border of the third right costal cartilage 1 cm from the right sternal line
-The lower border of the second left costal cartilage 2.5 cm from the left lateral sternal line.[10]

Archosaurs (crocodilians and birds) and mammals show complete separation of the heart into two pumps for
a total of four heart chambers; it is thought that the four-chambered heart of archosaurs evolved
independently from that of mammals. In crocodilians, there is a small opening, the foramen of Panizza, at
the base of the arterial trunks and there is some degree of mixing between the blood in each side of the
heart; thus, only in birds and mammals are the two streams of blood - those to the pulmonary and
systemic circulations - kept entirely separate by a physical barrier.[6]

In the human body, the heart is usually situated in the middle of the thorax with the largest part of the
heart slightly offset to the left, although sometimes it is on the right (see dextrocardia), underneath the
sternum. The heart is usually felt to be on the left side because the left heart (left ventricle) is stronger (it
pumps to all body parts). The left lung is smaller than the right lung because the heart occupies more of
the left hemithorax. The heart is fed by the coronary circulation and is enclosed by a sac known as the
pericardium; it is also surrounded by the lungs. The pericardium comprises two parts: the fibrous
pericardium, made of dense fibrous connective tissue, and a double membrane structure (parietal and
visceral pericardium) containing a serous fluid to reduce friction during heart contractions. The heart is
located in the mediastinum, which is the central sub-division of the thoracic cavity. The mediastinum also
contains other structures, such as the esophagus and trachea, and is flanked on either side by the right and
left pulmonary cavities; these cavities house the lungs.[11]

The apex is the blunt point situated in an inferior (pointing down and left) direction. A stethoscope can be
placed directly over the apex so that the beats can be counted. It is located posterior to the 5th intercostal
space just medial of the left mid-clavicular line. In normal adults, the mass of the heart is 250-350 g (9-
12 oz), or about twice the size of a clenched fist (it is about the size of a clenched fist in children), but an
extremely diseased heart can be up to 1000 g (2 lb) in mass due to hypertrophy. It consists of four
chambers, the two upper atria and the two lower ventricles.

[edit] Functioning

Blood flow diagram of the human heart. Blue components indicate de-oxygenated blood pathways and
red components indicate oxygenated pathways.

Image showing the conduction system of the heart

In mammals, the function of the right side of the heart (see right heart) is to collect de-oxygenated blood,
in the right atrium, from the body (via superior and inferior vena cavae) and pump it, through the tricuspid
valve, via the right ventricle, into the lungs (pulmonary circulation) so that carbon dioxide can be dropped
off and oxygen picked up (gas exchange). This happens through the passive process of diffusion. The left
side (see left heart) collects oxygenated blood from the lungs into the left atrium. From the left atrium the
blood moves to the left ventricle, through the bicuspid valve, which pumps it out to the body (via the
aorta). On both sides, the lower ventricles are thicker and stronger than the upper atria. The muscle wall
surrounding the left ventricle is thicker than the wall surrounding the right ventricle due to the higher
force needed to pump the blood through the systemic circulation.

Starting in the right atrium, the blood flows through the tricuspid valve to the right ventricle. Here, it is
pumped out the pulmonary semilunar valve and travels through the pulmonary artery to the lungs. From
there, oxygenated blood flows back through the pulmonary vein to the left atrium. It then travels through
the mitral valve to the left ventricle, from where it is pumped through the aortic semilunar valve to the
aorta. The aorta forks and the blood is divided between major arteries which supply the upper and lower
body. The blood travels in the arteries to the smaller arterioles and then, finally, to the tiny capillaries
which feed each cell. The (relatively) deoxygenated blood then travels to the venules, which coalesce into
veins, then to the inferior and superior venae cavae and finally back to the right atrium where the process
began.

The heart is effectively a syncytium, a meshwork of cardiac muscle cells interconnected by contiguous
cytoplasmic bridges, so electrical stimulation of one cell spreads to neighboring cells.

Some cardiac cells are self-excitable, contracting without any signal from the nervous system, even if
removed from the heart and placed in culture. Each of these cells has its own intrinsic contraction
rhythm. A region of the human heart called the sinoatrial node, or pacemaker, sets the rate and timing at
which all cardiac muscle cells contract. The SA node generates electrical impulses, much like those
produced by nerve cells. Because cardiac muscle cells are electrically coupled by intercalated discs
between adjacent cells, impulses from the SA node spread rapidly through the walls of the atria, causing
both atria to contract in unison. The impulses also pass to another region of specialized cardiac muscle
tissue, a relay point called the atrioventricular node, located in the wall between the right atrium and the
right ventricle. Here, the impulses are delayed for about 0.1 s before spreading to the walls of the
ventricle. The delay ensures that the atria empty completely before the ventricles contract. Specialized
muscle fibers called Purkinje fibers then conduct the signals to the apex of the heart and throughout
the ventricular walls. The Purkinje fibers form conducting pathways called bundle branches. This entire
cycle, a single heart beat, lasts about 0.8 seconds. The impulses generated during the heart cycle produce
electrical currents, which are conducted through body fluids to the skin, where they can be detected by
electrodes and recorded as an electrocardiogram (ECG or EKG).[12] The events related to the flow of
blood and the blood pressure that occur from the beginning of one heartbeat to the beginning of the next
are referred to as the cardiac cycle.[13]
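To make the conduction timing concrete, here is a minimal Python timeline of a single beat, using only the figures quoted above (the 0.1 s AV-node delay and the ~0.8 s cycle); the intermediate event times are illustrative placeholders, not measured values:

# One simplified cardiac conduction cycle, built from the figures above:
# a ~0.1 s AV-node delay within a ~0.8 s beat. Intermediate times are
# illustrative placeholders, not physiological measurements.
AV_DELAY_S = 0.1
CYCLE_S = 0.8

events = [
    (0.00, "SA node fires; impulse spreads across both atria"),
    (0.05, "atria contract in unison (illustrative time)"),
    (0.05 + AV_DELAY_S, "AV node releases the impulse to the bundle branches"),
    (0.20, "Purkinje fibers trigger ventricular contraction (illustrative)"),
    (0.40, "relaxation until the next SA node impulse (illustrative)"),
]

for t, label in events:
    print(f"t = {t:.2f} s: {label}")
print(f"cycle length {CYCLE_S} s ~ {60 / CYCLE_S:.0f} beats per minute")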

The SA node is found in all amniotes but not in more primitive vertebrates. In these animals, the muscles
of the heart are relatively continuous and the sinus venosus coordinates the beat which passes in a wave
through the remaining chambers. Indeed, since the sinus venosus is incorporated into the right atrium in
amniotes, it is likely homologous with the SA node. In teleosts, with their vestigial sinus venosus, the
main centre of coordination is, instead, in the atrium. The rate of heartbeat varies enormously between
different species, ranging from around 20 beats per minute in codfish to around 600 in hummingbirds.[6]

Cardiac arrest is the sudden cessation of normal heart rhythm which can include a number of pathologies
such as tachycardia, an extremely rapid heart beat which prevents the heart from effectively pumping
blood, fibrillation, which is an irregular and ineffective heart rhythm, and asystole, which is the cessation
of heart rhythm entirely.

Cardiac tamponade is a condition in which the fibrous sac surrounding the heart fills with excess fluid or
blood, suppressing the heart's ability to beat properly. Tamponade is treated by pericardiocentesis, the
gentle insertion of the needle of a syringe into the pericardial sac (avoiding the heart itself) on an angle,
usually from just below the sternum, and gently withdrawing the tamponading fluids.

[edit] History of discoveries

A preserved human heart with a visible gunshot wound

The valves of the heart were discovered by a physician of the Hippocratean school around the 4th century
BC. However, their function was not properly understood then. Because blood pools in the veins after
death, arteries look empty. Ancient anatomists assumed they were filled with air and that they were for
transport of air.

Philosophers distinguished veins from arteries, but thought the pulse was a property of arteries
themselves. Erasistratos observed that arteries cut during life bleed. He ascribed this to the
phenomenon that air escaping from an artery is replaced with blood entering through very small vessels
between veins and arteries. Thus he apparently postulated capillaries, but with reversed flow of blood.

In the 2nd century AD, the Greek physician Galenos (Galen) knew that blood vessels carried blood and
identified venous (dark red) and arterial (brighter and thinner) blood, each with distinct and separate
functions. Growth and energy were derived from venous blood created in the liver from chyle, while
arterial blood gave vitality by containing pneuma (air) and originated in the heart. Blood flowed from
both creating organs to all parts of the body, where it was consumed; there was no return of blood to the
heart or liver. The heart did not pump blood around: its motion sucked blood in during diastole, and the
blood was moved by the pulsation of the arteries themselves.

Galen believed the arterial blood was created by venous blood passing from the right ventricle to the left
through 'pores' in the interventricular septum, while air passed from the lungs via the pulmonary artery to
the left side of the heart. As the arterial blood was created, 'sooty' vapors were created and passed to the
lungs, also via the pulmonary artery, to be exhaled.

Cardiac surgery

Two cardiac surgeons performing coronary artery bypass surgery. Note the steel retractor
used to maintain exposure of the patient's heart.

Cardiovascular surgery is surgery on the heart and/or great vessels performed by cardiac surgeons.
Frequently, it is done to treat complications of ischemic heart disease (for example, coronary artery
bypass grafting), correct congenital heart disease, or treat valvular heart disease caused by various causes
including endocarditis. It also includes heart transplantation.

[edit] History

The earliest operations on the pericardium (the sac that surrounds the heart) took place in the 19th century
and were performed by Francisco Romero,[1] Dominique Jean Larrey, Henry Dalton, and Daniel Hale
Williams. The first surgery on the heart itself was performed by Norwegian surgeon Axel Cappelen on
the 4th of September 1895 at Rikshospitalet in Kristiania, now Oslo. He ligated a bleeding coronary artery
in a 24-year-old man who had been stabbed in the left axilla and was in deep shock on arrival. Access
was through a left thoracotomy. The patient awoke and seemed fine for 24 hours, but became ill with
increasing temperature and ultimately died on the third postoperative day from what the post mortem
proved to be mediastinitis.[2][3] The first successful surgery of the heart, performed without any
complications, was by Dr. Ludwig Rehn of Frankfurt, Germany, who repaired a stab wound to the right
ventricle on September 7, 1896.[citation needed]

Surgery on the great vessels (aortic coarctation repair, Blalock-Taussig shunt creation, closure of patent
ductus arteriosus) became common after the turn of the century and falls within the domain of cardiac
surgery, although technically it is not surgery on the heart itself.

[edit] Heart Malformations – Early Approaches

In 1925, operations on the heart valves were unknown when Henry Souttar operated successfully on a
young woman with mitral stenosis. He made an opening in the appendage of the left atrium and inserted a
finger into this chamber in order to palpate and explore the damaged mitral valve. The patient survived
for several years,[4] but Souttar's physician colleagues at that time decided the procedure was not justified,
and he could not continue.[5][6]

Cardiac surgery changed significantly after World War II. In 1948 four surgeons carried out successful
operations for mitral stenosis resulting from rheumatic fever. Horace Smithy (1914–1948) of Charlotte
revived an operation credited to Dr Dwight Harken of the Peter Bent Brigham Hospital, using a punch to
remove a portion of the mitral valve. Charles Bailey (1910–1993) at the Hahnemann Hospital,
Philadelphia, Dwight Harken in Boston and Russell Brock at Guy's Hospital all adopted Souttar's
method. All these men started work independently of each other, within a few months of one another. This
time Souttar's technique was widely adopted, although with modifications.[5][6]

In 1947 Thomas Holmes Sellors (1902–1987) of the Middlesex Hospital operated on a Fallot's tetralogy
patient with pulmonary stenosis and successfully divided the stenosed pulmonary valve. In 1948, Russell
Brock, probably unaware of Sellors' work, used a specially designed dilator in three cases of pulmonary
stenosis. Later in 1948 he designed a punch to resect the infundibular muscle stenosis which is often
associated with Fallot’s Tetralogy. Many thousands of these “blind” operations were performed until the
introduction of heart bypass made direct surgery on valves possible[5].

[edit] Open heart surgery

This is a surgery in which the patient's heart is opened and surgery is performed on the internal structures
of the heart.

It was soon discovered by Dr. Wilfred G. Bigelow of the University of Toronto that the repair of
intracardiac pathologies was better done in a bloodless and motionless field, which means that
the heart should be stopped and drained of blood. The first successful intracardiac correction of a
congenital heart defect using hypothermia was performed by Dr. C. Walton Lillehei and Dr. F. John
Lewis at the University of Minnesota on September 2, 1952. The following year, Soviet surgeon
Aleksandr Aleksandrovich Vishnevskiy conducted the first cardiac surgery under local anesthesia.

Surgeons realized the limitations of hypothermia – complex intracardiac repairs take more time and the
patient needs blood flow to the body (and particularly the brain); the patient needs the function of the
heart and lungs provided by an artificial method, hence the term cardiopulmonary bypass. Dr. John
Heysham Gibbon at Jefferson Medical School in Philadelphia reported in 1953 the first successful use of
extracorporeal circulation by means of an oxygenator, but he abandoned the method, disappointed by
subsequent failures. In 1954 Dr. Lillehei realized a successful series of operations with the controlled
cross-circulation technique in which the patient's mother or father was used as a 'heart-lung machine'. Dr.
John W. Kirklin at the Mayo Clinic in Rochester, Minnesota started using a Gibbon type pump-
oxygenator in a series of successful operations, and was soon followed by surgeons in various parts of the
world.

Dr. Nazih Zuhdi worked for four years under Drs. Clarence Dennis, Karl Karlson, and Charles Fries, who
built an early pump-oxygenator. Zuhdi and Fries worked on several designs and re-designs of Dennis'
earlier model from 1952–1956 at the Brooklyn Center. Zuhdi then went to work with Dr. C. Walton
Lillehei at the University of Minnesota. Lillehei had designed his own heart-lung
machine, which came to be known as the DeWall-Lillehei heart-lung machine. Zuhdi worked on
perfusion and blood flow, trying to solve the problem of air bubbles forming while the heart was bypassed
so that it could be stopped for the operation. Zuhdi moved to Oklahoma City, OK, in 1957 and began working at
the Oklahoma University College. Zuhdi, a heart surgeon, teamed up with Dr. Allen Greer, a lung
surgeon, and Dr. John Carey, forming a three-man open heart surgery team. Dr. Zuhdi's
heart-lung machine was much smaller than the DeWall-Lillehei machine; this and other modifications
reduced the amount of blood needed to a minimum, brought the cost of the equipment down to $500.00,
and cut the preparation time from two hours to 20 minutes.
Zuhdi performed the first Total Intentional Hemodilution open heart surgery on Terry Gene Nix, age 7, on
February 25, 1960, at Mercy Hospital, Oklahoma City, OK. The operation was a success; however, Nix
died three years later in 1963.[7] In March 1961, Zuhdi, Carey, and Greer performed open heart surgery
on a child, age 3½, using the Total Intentional Hemodilution machine, with success. That patient is still
alive.[8]

In 1985 Dr. Zuhdi performed Oklahoma's first successful heart transplant on Nancy Rogers at Baptist
Hospital. The transplant was successful, but Rogers, a cancer sufferer, died from an infection 54 days
after surgery.[9]

[edit] Modern beating-heart surgery

Since the 1990s, surgeons have begun to perform "off-pump bypass surgery" – coronary artery bypass
surgery without the aforementioned cardiopulmonary bypass. In these operations, the heart is beating
during surgery, but is stabilized to provide an almost still work area. Some researchers believe this
approach results in fewer post-operative complications (such as postperfusion syndrome) and better
overall results (study results were controversial as of 2007; the surgeon's preference and hospital results still
play a major role).

[edit] Minimally invasive surgery

A newer form of heart surgery that has grown in popularity is robot-assisted heart surgery, in which a
machine controlled by the cardiac surgeon is used to perform the operation. The main advantage is the
size of the incision made in the patient: instead of an incision at least big enough for the
surgeon to put his hands inside, only three small holes are needed for the robot's much
smaller instruments to pass through.

[edit] Pediatric Cardiovascular Surgery

Pediatric cardiovascular surgery is surgery on the hearts of children. Russell M. Nelson performed the first
successful pediatric cardiac operation at the Salt Lake General Hospital in March 1956, a total repair of
tetralogy of Fallot in a four-year-old girl.[10]

[edit] Risks

The development of cardiac surgery and cardiopulmonary bypass techniques has reduced the mortality
rates of these surgeries to relatively low levels. For instance, repairs of congenital heart defects are
currently estimated to have 4–6% mortality rates.[11][12]

A major concern with cardiac surgery is the incidence of neurological damage. Stroke occurs in 2–3% of
all people undergoing cardiac surgery, and is higher in patients at risk for stroke.[citation needed] A more subtle
constellation of neurocognitive deficits attributed to cardiopulmonary bypass is known as postperfusion
syndrome (sometimes called 'pumphead'). The symptoms of postperfusion syndrome were initially felt to
be permanent,[13] but were shown to be transient with no permanent neurological impairment.[14]

Heart disease


Heart disease

Classification and external resources

Micrograph of a heart with fibrosis (yellow) and amyloidosis (brown). Movat's stain.

ICD-10 I00-I52

ICD-9 390-429

MeSH D006331

Heart disease or cardiopathy is an umbrella term for a variety of diseases affecting the heart. As of
2007, it is the leading cause of death in the United States,[1][2] England, Canada and Wales,[3] accounting
for 25.4% of the total deaths in the United States.[4]

Types

Coronary heart disease

Main article: Coronary heart disease

Coronary heart disease refers to the failure of the coronary circulation to supply adequate blood flow to
the cardiac muscle and surrounding tissue. Coronary heart disease is most commonly equated with coronary
artery disease, although coronary heart disease can be due to other causes, such as coronary vasospasm.[5]

Coronary artery disease is a disease of the artery caused by the accumulation of atheromatous plaques
within the walls of the arteries that supply the myocardium. Angina pectoris (chest pain) and myocardial
infarction (heart attack) are symptoms of and conditions caused by coronary heart disease.

Over 459,000 Americans die of coronary heart disease every year.[6] In the United Kingdom, 101,000
deaths annually are due to coronary heart disease.[7]

Cardiomyopathy

Main article: Cardiomyopathy

Cardiomyopathy literally means "heart muscle disease" (myo = muscle, -pathy = disease). It is the
deterioration of the function of the myocardium (i.e., the actual heart muscle) for any reason. People with
cardiomyopathy are often at risk of arrhythmia and/or sudden cardiac death.

 Extrinsic cardiomyopathies – cardiomyopathies where the primary pathology is outside the
myocardium itself. Most cardiomyopathies are extrinsic, because by far the most common cause
of a cardiomyopathy is ischemia. The World Health Organization calls these specific
cardiomyopathies[citation needed]:
o Alcoholic cardiomyopathy
o Coronary artery disease
o Congenital heart disease
o Nutritional diseases affecting the heart

o Ischemic (or ischaemic) cardiomyopathy
o Hypertensive cardiomyopathy
o Valvular cardiomyopathy – see also Valvular heart disease below
o Inflammatory cardiomyopathy – see also Inflammatory heart disease below
o Cardiomyopathy secondary to a systemic metabolic disease
o Myocardiodystrophy
 Intrinsic cardiomyopathies – weakness in the muscle of the heart that is not due to an identifiable
external cause.
o Dilated cardiomyopathy (DCM) – most common form, and one of the leading indications
for heart transplantation. In DCM the heart (especially the left ventricle) is enlarged and
the pumping function is diminished.
o Hypertrophic cardiomyopathy (HCM or HOCM) – genetic disorder caused by various
mutations in genes encoding sarcomeric proteins. In HCM the heart muscle is thickened,
which can obstruct blood flow and prevent the heart from functioning properly.
o Arrhythmogenic right ventricular cardiomyopathy (ARVC) – arises from an electrical
disturbance of the heart in which heart muscle is replaced by fibrous scar tissue. The right
ventricle is generally most affected.
o Restrictive cardiomyopathy (RCM) – least common cardiomyopathy. The walls of the
ventricles are stiff, but may not be thickened, and resist the normal filling of the heart
with blood.
o Noncompaction cardiomyopathy – the left ventricle wall has failed to grow properly
from birth and as such has a spongy appearance when viewed during an echocardiogram.

Cardiovascular disease

Main article: Cardiovascular disease

Cardiovascular disease is any of a number of specific diseases that affect the heart itself and/or the blood
vessel system, especially the veins and arteries leading to and from the heart. Research on disease
dimorphism suggests that women who suffer from cardiovascular disease usually suffer from forms that
affect the blood vessels, while men usually suffer from forms that affect the heart muscle itself. Known or
associated causes of cardiovascular disease include diabetes mellitus, hypertension,
hyperhomocysteinemia and hypercholesterolemia.

Types of cardiovascular disease include:

 Atherosclerosis

Ischaemic heart disease

 Ischaemic heart disease – another disease of the heart itself, characterized by reduced blood
supply to the heart muscle.

Heart failure

Main article: Heart failure

Heart failure, also called congestive heart failure (CHF) or congestive cardiac failure (CCF), is a
condition that can result from any structural or functional cardiac disorder that impairs the ability of the
heart to fill with or pump a sufficient amount of blood throughout the body.

 Cor pulmonale, a failure of the right side of the heart.

Hypertensive heart disease

Main article: Hypertensive heart disease

Hypertensive heart disease is heart disease caused by high blood pressure, especially localised high blood
pressure. Conditions that can be caused by hypertensive heart disease include:

 Left ventricular hypertrophy


 Coronary heart disease
 (Congestive) heart failure
 Hypertensive cardiomyopathy
 Cardiac arrhythmias

Inflammatory heart disease

Inflammatory heart disease involves inflammation of the heart muscle and/or the tissue surrounding
it.

 Endocarditis – inflammation of the inner layer of the heart, the endocardium. The most common
structures involved are the heart valves.
 Inflammatory cardiomegaly
 Myocarditis – inflammation of the myocardium, the muscular part of the heart.

Valvular heart disease

Main article: Valvular heart disease

Valvular heart disease is a disease process that affects one or more valves of the heart. There are four major
heart valves which may be affected: the tricuspid and pulmonary valves on the right side of the heart, and
the mitral and aortic valves on the left side.

Blue baby syndrome


A cyanotic newborn, or "blue baby"

Blue baby syndrome (or simply, blue baby) is a layman's term used to describe newborns with cyanotic
conditions, such as

 Cyanotic heart defects
o Tetralogy of Fallot[1]
o Dextro-transposition of the great arteries
o Complete atrioventricular septal defect
 Tricuspid atresia
 Methemoglobinemia[2][3]
 Respiratory distress syndrome

Blue baby syndrome can also be caused by methemoglobinemia, which is believed to result from high
nitrate contamination of ground water: the nitrate decreases the oxygen-carrying capacity of hemoglobin in
babies and can lead to death. The groundwater is thought to be contaminated by leaching of nitrate generated
from fertilizer used on agricultural land and from waste dumps.[4] It may also be related to some pesticides
(DDT, PCBs, etc.), which cause ecotoxicological problems in the food chains of living organisms and
increase biochemical oxygen demand (BOD), killing aquatic animals.

The condition develops when the organs, cells and tissues do not receive adequate
oxygen; the blood flowing through the body is blue in color instead of red, indicating poor oxygen
levels. Although blue baby syndrome can be fatal if left untreated, modern treatments can typically
correct the problem. The syndrome occurs most frequently in infants under six months of age, but in rare
cases it can also affect older children and adults.

The most common cause of blue baby syndrome is Tetralogy of Fallot, a congenital heart disease in
which four different abnormalities of the heart cause a reduction in blood oxygen. These abnormalities
include ventricular septal defect (VSD), pulmonary stenosis, thickening of the right ventricle and a
displaced or deviated aorta. VSD is characterized by a hole in the wall between the two lower chambers of the
heart. Pulmonary stenosis occurs when the pulmonic valve and the muscular area below the valve are
narrower than normal.

Blue baby syndrome may also be caused by excessive nitrates in drinking water. When consumed, the
nitrates are converted to nitrite in the digestive system. The nitrites then react with the hemoglobin,
causing dangerously high levels of methemoglobin. This altered form of hemoglobin cannot carry oxygen
through the blood as normal hemoglobin does, resulting in organs, cells and tissues that are deprived of
oxygen and skin with the characteristic bluish tint.
produce higher numbers of infants born with this condition.

Other causes of blue baby syndrome are also related to congenital heart problems. Transposition of the
great arteries (TGA) occurs when the aorta and pulmonary artery are transposed or switched, causing
oxygen-poor blood to be carried throughout the body instead of being sent to the lungs. Hypoplastic left
heart syndrome (HLHS) can also result in blue baby syndrome, and is caused when the left side of the
heart is underdeveloped, resulting in a left ventricle that does not pump enough oxygen-rich blood
through the body.

Blue-tinted skin may be difficult to recognize in children with darker skin colors, but there are other
symptoms to alert parents and doctors to a potential problem. The other symptoms of blue baby syndrome
include fatigue, low tolerance for exercise, rapid breathing and shortness of breath, difficulty breathing or
eating, and heart murmurs. The child may also fail to gain weight and appear lethargic for no apparent reason.

[edit] Other Causes

 Persistent Truncus Arteriosus

[edit] Surgery

On November 29, 1944, the Johns Hopkins Hospital was the first to successfully perform an operation to
relieve Tetralogy of Fallot.[5] The syndrome was brought to the attention of surgeon Alfred Blalock and
his laboratory assistant Vivien Thomas in 1943 by pediatric cardiologist Helen Taussig, who had treated
hundreds of children with Tetralogy of Fallot in her work at Hopkins' Harriet Lane Home for Invalid
Children. The two men adapted a surgical procedure they had earlier developed for another purpose,
involving the anastomosis, or joining, of the subclavian artery to the pulmonary artery, which allowed the
blood another chance to become oxygenated. The procedure became known as the Blalock-Taussig shunt,
although in recent years the contribution of Vivien Thomas, both experimentally and clinically, has been
widely acknowledged.



Metal



Some metal pieces

A metal is a chemical element that is a good conductor of both electricity and heat and forms cations and
ionic bonds with non-metals.

In chemistry, a metal (from Greek "μέταλλον" - métallon, "mine"[1]) is an element, compound, or alloy
characterized by high electrical conductivity. In a metal, atoms readily lose electrons to form positive ions
(cations). Those ions are surrounded by delocalized electrons, which are responsible for the conductivity.
The solid thus produced is held together by electrostatic interactions between the ions and the electron cloud,
which are called metallic bonds.[2]

Usage in astronomy is quite different.

[edit] Definition


Metals are sometimes described as an arrangement of positive ions surrounded by a sea of delocalized
electrons. They are one of the three groups of elements as distinguished by their ionization and bonding
properties, along with the metalloids and non-metals.

Metals occupy the bulk of the periodic table, while non-metallic elements can only be found on the right-
hand-side of the Periodic Table of the Elements. A diagonal line, drawn from boron (B) to polonium (Po),
separates the metals from the nonmetals. Most elements on this line are metalloids, sometimes called
semiconductors. This is because these elements exhibit electrical properties common to both conductors
and insulators. Elements to the lower left of this division line are called metals, while elements to the
upper right of the division line are called non-metals.

An alternative definition of metal refers to the band theory. If one fills the energy bands of a material with
available electrons and ends up with a top band partly filled then the material is a metal. This definition
opens up the category for metallic polymers and other organic metals, which have been made by
researchers and employed in high-tech devices. These synthetic materials often have the characteristic
silvery gray reflectiveness (luster) of elemental metals.

[edit] Astronomy

Main article: Metallicity

In the specialized usage of astronomy and astrophysics, the term "metal" is often used to refer collectively
to all elements other than hydrogen or helium, including substances as chemically non-metallic as neon,
fluorine, and oxygen. Nearly all the hydrogen and helium in the Universe was created in Big Bang
nucleosynthesis, whereas all the "metals" were produced by nucleosynthesis in stars or supernovae. The
Sun and the Milky Way Galaxy are composed of roughly 74% hydrogen, 24% helium, and 2% "metals"
(the rest of the elements; atomic numbers 3-118) by mass.[3]

The concept of a metal in the usual chemical sense is irrelevant in stars, as the chemical bonds that give
elements their properties cannot exist at stellar temperatures.

[edit] Properties

[edit] Chemical

Metals are usually inclined to form cations through electron loss,[2] reacting with oxygen in the air to form
oxides over varying timescales (iron rusts over years, while potassium burns in seconds). Examples:

4 Na + O2 → 2 Na2O (sodium oxide)

2 Ca + O2 → 2 CaO (calcium oxide)

4 Al + 3 O2 → 2 Al2O3 (aluminium oxide)

The transition metals (such as iron, copper, zinc, and nickel) take much longer to oxidize. Others, like
palladium, platinum and gold, do not react with the atmosphere at all. Some metals form a barrier layer of
oxide on their surface which cannot be penetrated by further oxygen molecules and thus retain their shiny
appearance and good conductivity for many decades (like aluminium, magnesium, some steels, and
titanium). The oxides of metals are generally basic, as opposed to those of nonmetals, which are acidic.

Painting, anodizing or plating metals are good ways to prevent their corrosion. However, a more reactive
metal in the electrochemical series must be chosen for coating, especially when chipping of the coating is
expected. Water and the two metals form an electrochemical cell, and if the coating is less reactive than
the coatee, the coating actually promotes corrosion.
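
As a concrete illustration of choosing a sacrificial coating, the sketch below (illustrative Python; the rounded standard electrode potentials and the helper function are assumptions for this example, not values or code from this article) compares a candidate coating with its substrate and reports whether a chip in the coating would protect or endanger the part:

    # Rounded standard electrode potentials (volts) for M^n+ + n e- -> M.
    # A more negative value means a more reactive, more easily oxidized metal.
    STANDARD_POTENTIALS = {
        "zinc": -0.76,
        "iron": -0.44,
        "nickel": -0.26,
        "tin": -0.14,
        "copper": 0.34,
    }

    def coating_effect(coating, substrate):
        """Sacrificial (protective) if the coating is more reactive than the
        substrate; otherwise chips in the coating promote corrosion."""
        if STANDARD_POTENTIALS[coating] < STANDARD_POTENTIALS[substrate]:
            return coating + " is sacrificial and protects " + substrate
        return coating + " is more noble; chips accelerate corrosion of " + substrate

    print(coating_effect("zinc", "iron"))  # galvanized steel: zinc corrodes first
    print(coating_effect("tin", "iron"))   # tinplate: scratches rust quickly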

[edit] Physical

Gallium crystals

Metals in general have high electrical conductivity, thermal conductivity, luster and density, and the
ability to be deformed under stress without cleaving.[2] While there are several metals that have low
density, hardness, and melting points, these (the alkali and alkaline earth metals) are extremely reactive,
and are rarely encountered in their elemental, metallic form. Optically speaking, metals are opaque, shiny
and lustrous. This is because visible light waves are not readily transmitted through the bulk of their
microstructure. The large number of free electrons in any typical metallic solid (element or alloy) is
responsible for the fact that they can never be categorized as transparent materials.

The majority of metals have higher densities than the majority of nonmetals.[2] Nonetheless, there is wide
variation in the densities of metals; lithium is the least dense solid element and osmium is the densest.
The metals of groups I A and II A are referred to as the light metals because they are exceptions to this
generalization.[2] The high density of most metals is due to the tightly packed crystal lattice of the metallic
structure. The strength of metallic bonds for different metals reaches a maximum around the center of the
transition metal series, as those elements have large amounts of delocalized electrons in tight binding type
metallic bonds. However, other factors (such as atomic radius, nuclear charge, number of bonding
orbitals, overlap of orbital energies, and crystal form) are involved as well.[2]

[edit] Electrical

The electrical and thermal conductivity of metals originate from the fact that in the metallic bond, the
outer electrons of the metal atoms form a gas of nearly free electrons, moving as an electron gas in a
background of positive charge formed by the ion cores. Good mathematical predictions for electrical
conductivity, as well as the electrons' contribution to the heat capacity and heat conductivity of metals can
be calculated from the free electron model, which does not take the detailed structure of the ion lattice
into account.
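
As a worked example of the free electron model, the following sketch (illustrative Python; the electron density and relaxation time are typical textbook values for copper, assumed here rather than taken from this article) estimates electrical conductivity from the Drude formula sigma = n·e²·τ/m:

    # Drude estimate of electrical conductivity: sigma = n * e^2 * tau / m
    e = 1.602e-19    # elementary charge, C
    m = 9.109e-31    # electron mass, kg
    n = 8.5e28       # conduction electron density of copper, m^-3 (assumed)
    tau = 2.5e-14    # mean time between collisions, s (assumed)

    sigma = n * e**2 * tau / m
    print("sigma ~ %.2e S/m" % sigma)  # ~6e7 S/m, close to copper's measured value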

When considering the exact band structure and binding energy of a metal, it is necessary to take into
account the positive potential caused by the specific arrangement of the ion cores - which is periodic in
crystals. The most important consequence of the periodic potential is the formation of a small band gap at
the boundary of the Brillouin zone. Mathematically, the potential of the ion cores can be treated by
various models, the simplest being the nearly free electron model.

[edit] Mechanical

Mechanical properties of metals include ductility, which is largely due to their inherent capacity for
plastic deformation. Reversible elasticity in metals can be described by Hooke's law for restoring forces,
in which the stress is linearly proportional to the strain (a numerical sketch follows the list below).
Forces larger than the elastic limit, or heat, may cause a permanent (irreversible) deformation of the object,
known as plastic deformation or plasticity. This irreversible change in atomic arrangement may occur as a result of:

 The action of an applied force (or work). An applied force may be tensile (pulling) force,
compressive (pushing) force, shear, bending or torsion (twisting) forces.

 A change in temperature (or heat). A temperature change may affect the mobility of the structural
defects such as grain boundaries, point vacancies, line and screw dislocations, stacking faults and
twins in both crystalline and non-crystalline solids. The movement or displacement of such
mobile defects is thermally activated, and thus limited by the rate of atomic diffusion.
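
A minimal numerical sketch of the elastic regime described above (illustrative Python; the Young's modulus is the copper value quoted later in this document, while the elastic limit is an assumed round figure for annealed copper):

    E = 117e9              # Young's modulus of copper, Pa (110-128 GPa range)
    elastic_limit = 70e6   # assumed yield stress for annealed copper, Pa

    def stress(strain):
        """Hooke's law: stress is linearly proportional to strain."""
        return E * strain

    s = stress(0.0005)                      # 0.05% elongation
    print("stress = %.1f MPa" % (s / 1e6))  # 58.5 MPa
    print("elastic" if s < elastic_limit else "plastic (permanent) deformation")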

Hot metal work from a blacksmith.

Viscous flow near grain boundaries, for example, can give rise to internal slip, creep and fatigue in
metals. It can also contribute to significant changes in the microstructure like grain growth and localized
densification due to the elimination of intergranular porosity. Screw dislocations may slip in the direction
of any lattice plane containing the dislocation, while the principal driving force for "dislocation climb" is
the movement or diffusion of vacancies through a crystal lattice.

In addition, the nondirectional nature of metallic bonding is also thought to contribute significantly to the
ductility of most metallic solids. When planes of atoms in an ionic crystal slide past one another, the resultant
change in location shifts ions of the same charge into close proximity, resulting in the cleavage of the
crystal; such a shift is not observed in covalently bonded crystals, where fracture and crystal fragmentation
occur.[4]

[edit] Alloys

Main article: Alloy

An alloy is a mixture of two or more elements in solid solution in which the major component is a metal.
Most pure metals are either too soft, brittle or chemically reactive for practical use. Combining different
ratios of metals as alloys modifies the properties of pure metals to produce desirable characteristics. The
aim of making alloys is generally to make them less brittle, harder, resistant to corrosion, or have a more
desirable color and luster. Of all the metallic alloys in use today, the alloys of iron (steel, stainless steel,
cast iron, tool steel, alloy steel) make up the largest proportion both by quantity and commercial value.
Iron alloyed with various proportions of carbon gives low-, mid- and high-carbon steels, with increasing
carbon levels reducing ductility and toughness. The addition of silicon will produce cast irons, while the
addition of chromium (more than 10%), nickel and molybdenum to carbon steels results in stainless
steels.
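
The composition thresholds just described can be expressed as a small classifier (illustrative Python; the cut-off values are common rough conventions and are assumptions, not figures from this article):

    def classify_steel(carbon_pct, chromium_pct=0.0):
        """Rough classification of an iron-carbon alloy by composition."""
        if chromium_pct > 10.0:
            return "stainless steel"
        if carbon_pct < 0.3:
            return "low-carbon steel"
        if carbon_pct <= 0.6:
            return "mid-carbon steel"
        return "high-carbon steel (less ductile, less tough)"

    print(classify_steel(0.1))          # low-carbon steel
    print(classify_steel(0.8))          # high-carbon steel
    print(classify_steel(0.08, 18.0))   # stainless steel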

Other significant metallic alloys are those of aluminium, titanium, copper and magnesium. Copper alloys
have been known since prehistory—bronze gave the Bronze Age its name—and have many applications
today, most importantly in electrical wiring. The alloys of the other three metals have been developed
relatively recently; due to their chemical reactivity they require electrolytic extraction processes. The
alloys of aluminium, titanium and magnesium are valued for their high strength-to-weight ratios;
magnesium can also provide electromagnetic shielding[citation needed]. These materials are ideal for situations
where high strength-to-weight ratio is more important than material cost, such as in aerospace and some
automotive applications.

Alloys specially designed for highly demanding applications, such as jet engines, may contain more than
ten elements.

[edit] Categories

[edit] Base metal

Main article: Base metal

Zinc, a base metal, reacting with an acid

In chemistry, the term base metal is used informally to refer to a metal that oxidizes or corrodes relatively
easily and reacts variably with dilute hydrochloric acid (HCl) to form hydrogen. Examples include iron,
nickel, lead and zinc. Copper is considered a base metal as it oxidizes relatively easily, although it does
not react with HCl. The term is commonly used in opposition to noble metal.

In alchemy, a base metal was a common and inexpensive metal, as opposed to precious metals, mainly
gold and silver. A longtime goal of the alchemists was the transmutation of base metals into precious
metals.

In numismatics, coins used to derive their value primarily from the precious metal content. Most modern
currencies are fiat currency, allowing the coins to be made of base metal.

[edit] Ferrous metal

Main article: Ferrous and non-ferrous metals

The term "ferrous" is derived from the Latin word ferrum, meaning "iron". It can refer to pure iron,
such as wrought iron, or to an alloy such as steel. Ferrous metals are often, but not exclusively, magnetic.

[edit] Noble metal

Main article: Noble metal

Noble metals are metals that are resistant to corrosion or oxidation, unlike most base metals. They tend to
be precious metals, often due to perceived rarity. Examples include tantalum, gold, platinum, silver and
rhodium.

[edit] Precious metal

A gold nugget

Main article: Precious metal

A precious metal is a rare metallic chemical element of high economic value.

Chemically, the precious metals are less reactive than most elements, have high luster and high electrical
conductivity. Historically, precious metals were important as currency, but are now regarded mainly as
investment and industrial commodities. Gold, silver, platinum and palladium each have an ISO 4217
currency code. The best-known precious metals are gold and silver. While both have industrial uses, they
are better known for their uses in art, jewelry, and coinage. Other precious metals include the platinum
group metals: ruthenium, rhodium, palladium, osmium, iridium, and platinum, of which platinum is the
most widely traded. Plutonium and uranium could also be considered precious metals.

The demand for precious metals is driven not only by their practical use, but also by their role as
investments and a store of value. Palladium was, as of summer 2006, valued at a little under half the price
of gold, and platinum at around twice that of gold. Silver is substantially less expensive than these metals,
but is often traditionally considered a precious metal for its role in coinage and jewelry.

[edit] Extraction

Main articles: Ore, Mining, and Extractive metallurgy

Metals are often extracted from the Earth by means of mining, resulting in ores that are relatively rich
sources of the requisite elements. Ore is located by prospecting techniques, followed by the exploration
and examination of deposits. Mineral sources are generally divided into surface mines, which are mined
by excavation using heavy equipment, and subsurface mines.

Once the ore is mined, the metals must be extracted, usually by chemical or electrolytic reduction.
Pyrometallurgy uses high temperatures to convert ore into raw metals, while hydrometallurgy employs
aqueous chemistry for the same purpose. The methods used depend on the metal and its contaminants.

When a metal ore is an ionic compound of that metal and a non-metal, the ore must usually be smelted —
heated with a reducing agent — to extract the pure metal. Many common metals, such as iron, are smelted
using carbon as a reducing agent. Some metals, such as aluminium and sodium, have no commercially
practical reducing agent, and are extracted using electrolysis instead.[5]
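
For electrolytic extraction, Faraday's laws of electrolysis relate the mass of metal deposited to the charge passed. A sketch for copper, reduced as Cu2+ + 2 e− → Cu (illustrative Python; the current and duration are invented example numbers):

    F = 96485.0   # Faraday constant, C/mol

    def electrolysis_mass(current_a, hours, molar_mass, electrons):
        """Mass of metal (grams) deposited by a given current over time."""
        charge = current_a * hours * 3600      # total charge passed, C
        moles_metal = charge / F / electrons   # moles of metal reduced
        return moles_metal * molar_mass

    # Example: 200 A for 24 h depositing copper (M = 63.546 g/mol, z = 2)
    print("%.2f kg" % (electrolysis_mass(200, 24, 63.546, 2) / 1000))  # ~5.69 kg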

Sulfide ores are not reduced directly to the metal but are roasted in air to convert them to oxides.

[edit] Metallurgy

Main article: Metallurgy

Metallurgy is a domain of materials science that studies the physical and chemical behavior of metallic
elements, their intermetallic compounds, and their mixtures, which are called alloys.

[edit] Applications

Some metals and metal alloys possess high structural strength per unit mass, making them useful
materials for carrying large loads or resisting impact damage. Metal alloys can be engineered to have high
resistance to shear, torque and deformation. However, the same metal can also be vulnerable to fatigue
damage through repeated use, or to sudden stress failure when its load capacity is exceeded. The strength
and resilience of metals have led to their frequent use in high-rise building and bridge construction, as well
as most vehicles, many appliances, tools, pipes, non-illuminated signs and railroad tracks.

The two most commonly used structural metals, iron and aluminium, are also the most abundant metals in
the Earth's crust.[6]

Metals are good conductors, making them valuable in electrical appliances and for carrying an electric
current over a distance with little energy lost. Electrical power grids rely on metal cables to distribute
electricity. Home electrical systems, for the most part, are wired with copper wire for its good conducting
properties.

The thermal conductivity of metal is useful for containers to heat materials over a flame. Metal is also
used for heat sinks to protect sensitive equipment from overheating.

The high reflectivity of some metals is important in the construction of mirrors, including precision
astronomical instruments. This last property can also make metallic jewelry aesthetically appealing.

Some metals have specialized uses; radioactive metals such as uranium and plutonium are used in nuclear
power plants to produce energy via nuclear fission. Mercury is a liquid at room temperature and is used in
switches to complete a circuit when it flows over the switch contacts. Shape memory alloy is used for
applications such as pipes, fasteners and vascular stents.

Copper



Periodic table position: nickel ← copper → zinc; group 11: – (above), Cu, Ag (below); 29Cu

Appearance

red-orange metallic luster

Native copper (~4 cm in size)

General properties

Name, symbol, number copper, Cu, 29

Pronunciation /ˈkɒpər/ KOP-ər

Element category transition metal

Group, period, block 11, 4, d

Standard atomic weight 63.546 g·mol−1

Electron configuration [Ar] 3d10 4s1

Electrons per shell 2, 8, 18, 1

Physical properties

Phase solid

Density (near r.t.) 8.94 g·cm−3

Liquid density at m.p. 8.02 g·cm−3

Melting point 1357.77 K, 1084.62 °C, 1984.32 °F

Boiling point 2835 K, 2562 °C, 4643 °F

Heat of fusion 13.26 kJ·mol−1

Heat of vaporization 300.4 kJ·mol−1

Specific heat capacity (25 °C) 24.440 J·mol−1·K−1

Vapor pressure

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 1509 1661 1850 2089 2404 2834

Atomic properties

Oxidation states +1, +2, +3, +4 (mildly basic oxide)

Electronegativity 1.90 (Pauling scale)

Ionization energies 1st: 745.5 kJ·mol−1

2nd: 1957.9 kJ·mol−1

3rd: 3555 kJ·mol−1 (more)

Atomic radius 128 pm

Covalent radius 132±4 pm

Van der Waals radius 140 pm

Miscellanea

Crystal structure face-centered cubic

Magnetic ordering diamagnetic

Electrical resistivity (20 °C) 16.78 nΩ·m

Thermal conductivity (300 K) 401 W·m−1·K−1

Thermal expansion (25 °C) 16.5 µm·m−1·K−1

Speed of sound (thin rod) (r.t., annealed) 3810 m·s−1

Young's modulus 110–128 GPa

Shear modulus 48 GPa

Bulk modulus 140 GPa

Poisson ratio 0.34

Mohs hardness 3.0

Vickers hardness 369 MPa

Brinell hardness 874 MPa

CAS registry number 7440-50-8

Most stable isotopes

Main article: Isotopes of copper

iso NA half-life DM DE (MeV) DP

63Cu 69.15% 63Cu is stable with 34 neutrons

65Cu 30.85% 65Cu is stable with 36 neutrons

Copper ( /ˈkɒpər/ KOP-ər) is a chemical element with the symbol Cu (Latin: cuprum) and atomic
number 29. It is a ductile metal, with very high thermal and electrical conductivity. Pure copper is rather
soft and malleable, and a freshly exposed surface has a reddish-orange color. It is used as a thermal
conductor, an electrical conductor, a building material, and a constituent of various metal alloys.

A copper disc (99.95% pure) made by continuous casting and etching.

Copper metal and alloys have been used for thousands of years. In the Roman era, copper was principally
mined on Cyprus, hence the origin of the name of the metal as Cyprium, "metal of Cyprus", later
shortened to Cuprum.

Like many other metals, copper is easily recyclable. However, the fraction of copper in active use is
steadily increasing, and its total quantity available on Earth may be barely sufficient to allow all countries
to reach developed world levels of copper usage.[1] Some countries, such as Chile and the United States,
still have sizeable reserves of unmined metal which are extracted through large open pit mines.

Copper compounds are commonly encountered as salts of Cu2+, which often impart blue or green colors
to minerals such as turquoise and have been widely used historically as pigments. Copper metal
architectural structures and statuary eventually corrode to acquire a characteristic green patina. Copper, as
both metal and pigmented salt, has a significant presence in decorative art.

Copper(II) ions (Cu2+) are soluble in water, where they function at low concentration as bacteriostatic
substances, fungicides, and wood preservatives. In sufficient amounts, copper salts can be poisonous to
higher organisms as well. However, despite universal toxicity at high concentrations, the Cu2+ ion at
lower concentrations is an essential trace nutrient to all higher plant and animal life. In animals, including
humans, it is found widely in tissues, with concentration in liver, muscle, and bone. It functions as a co-
factor in various enzymes and in copper-based pigments.

Contents


 1 Characteristics
o 1.1 Physical properties
o 1.2 Electrical properties
o 1.3 Chemical characteristics
o 1.4 Occurrence
o 1.5 Isotopes
 2 Compounds
o 2.1 Copper(I)
o 2.2 Copper(II)
o 2.3 Copper(III) and copper(IV)
 3 History

o 3.1 Copper Age
o 3.2 Bronze Age
o 3.3 Antiquity and Middle Ages
o 3.4 Modern period
 4 Production
o 4.1 Reserves
o 4.2 Methods
o 4.3 Recycling
 5 Applications
o 5.1 Electronics and related devices
o 5.2 Architecture and industry
o 5.3 Applications of copper compounds
o 5.4 Biomedical applications
o 5.5 Aquaculture applications
o 5.6 Miscellaneous uses
 6 Alloys
 7 Biological role
o 7.1 Copper essentiality
o 7.2 Copper excess and deficiency
o 7.3 Toxicity and precautions
o 7.4 Antibacterial properties
 8 See also
 9 References
 10 Further reading
 11 Notes
 12 External links

Characteristics

Physical properties

Copper occupies the same family of the periodic table as silver and gold, since they each have one s-
orbital electron on top of a filled electron shell which forms metallic bonds. Like silver and gold, copper
is easily worked, being both ductile and malleable. The ease with which it can be drawn into wire makes
it useful for electrical work as does its excellent electrical conductivity. Copper is normally supplied, as
with nearly all metals for industrial and commercial use, in a fine grained polycrystalline form.
Polycrystalline metals have greater strength than monocrystalline forms, and the difference is greater for
smaller grain (crystal) sizes.[2]

Copper just above its melting point keeps its pink luster when enough light outshines the orange
incandescence.

Comparison between unoxidized copper wire (left) and normal oxidized copper (right).

Copper has a reddish, orangish, or brownish color owing to a thin layer of tarnish (including oxides). Pure
copper is pink- or peach-colored. Copper, osmium (bluish) and gold (yellow) are the only three
elemental metals with a natural color other than gray or silver.[3] Copper's characteristic color results from
its band structure: copper is the exception to Madelung's rule, with only one electron in the 4s subshell
instead of two. The energy of a photon of blue or violet light is sufficient for a d-band electron to absorb it
and transition to the half-full s band. Thus, the light reflected by copper is missing some blue/violet
components and appears red. This phenomenon is also exhibited by gold, which has a corresponding 6s/5d
structure.[4] Liquid copper appears somewhat greenish, a characteristic shared with gold in the absence of
bright ambient light.

Electrical properties

Copper electrical busbars distributing power to a large building

The similarity in electron structure makes copper, silver, and gold similar in many ways: All three have
high thermal and electrical conductivities, and all three are malleable. Among pure metals at room
temperature, copper has the second highest electrical and thermal conductivity, after silver.[5] At
59.6×106 S/m copper has the second highest electrical conductivity of any element, just after silver. This
high value is due to virtually all the valence electrons (one per atom) taking part in conduction. The
resulting free electrons in the copper amount to a huge charge density of 13.6×109 C/m3. This high charge
density is responsible for the rather slow drift velocity of currents in copper cable (drift velocity may be
calculated as the ratio of current density to charge density). For instance, at a current density of
5×106 A/m2 (typically, the maximum current density present in household wiring and grid distribution)
the drift velocity is just a little over ⅓ mm/s.[6]
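
The drift-velocity figure quoted above follows directly from the numbers in this paragraph (illustrative Python, restating the arithmetic only):

    current_density = 5e6     # A/m^2, typical maximum in household wiring
    charge_density = 13.6e9   # C/m^3, free-electron charge density of copper

    drift_velocity = current_density / charge_density  # m/s
    print("%.2f mm/s" % (drift_velocity * 1000))  # ~0.37 mm/s, just over 1/3 mm/s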

Chemical characteristics

Main article: Galvanic corrosion

In direct mechanical contact with metals of different electropotential (for example, a copper pipe joined to
an iron pipe), especially in the presence of moisture, the completion of an electrical circuit (for instance
through the common ground) will cause the juncture to act as an electrochemical cell (like a single cell of
a battery). The weak electrical currents themselves are harmless but the electrochemical reaction will
cause the conversion of the iron to other compounds, eventually destroying the functionality of the union.

During the late 20th century in the United States, the temporary popularity of aluminium for household
electrical wiring resulted in many homes having a combination of copper and aluminium wiring
necessitating electrical contact (and therefore physical contact) between the two metals. Some issues were
experienced by homeowners and housing contractors.

Copper does not react with water, but it slowly reacts with atmospheric oxygen forming a layer of brown-
black copper oxide. In contrast to the oxidation of iron by wet air, this oxide layer stops the further, bulk
corrosion. A green layer of copper carbonate, called verdigris, can often be seen on old copper
constructions, such as the Statue of Liberty.

Copper reacts with hydrogen sulfide- and sulfide-containing solutions, forming various copper sulfides on
its surface. In sulfide-containing solutions, copper is less noble than hydrogen and will corrode. This is
observed in everyday life when copper metal surfaces tarnish after exposure to air containing sulfur
compounds.

Copper is slowly dissolved in oxygen-containing ammonia solutions because ammonia forms water-
soluble complexes with copper. Copper reacts with a combination of oxygen and hydrochloric acid to
form a series of copper chlorides. Copper(II) chloride (green/blue) when boiled with copper metal
undergoes a comproportionation reaction to form white copper(I) chloride.

Copper reacts with an acidified solution of hydrogen peroxide to form the corresponding copper(II) salt:

Cu + 2 HCl + H2O2 → CuCl2 + 2 H2O

Occurrence

Crystals of native copper.

Copper can be found as native copper in mineral form (for example, in Michigan's Keweenaw Peninsula).
It is a polycrystal, with the largest single crystals measuring 4.4×3.2×3.2 cm.[7] Minerals such as the
sulfides: chalcopyrite (CuFeS2), bornite (Cu5FeS4), covellite (CuS), chalcocite (Cu2S) are sources of
copper, as are the carbonates: azurite (Cu3(CO3)2(OH)2) and malachite (Cu2CO3(OH)2) and the oxide:
cuprite (Cu2O).[5]

Copper is found in a variety of enzymes and proteins, including the cytochrome c oxidase and certain
superoxide dismutases. Copper is used for biological electron transport, e.g. the blue copper proteins
azurin and plastocyanin. The name "blue copper" comes from their intense blue color arising from a
ligand-to-metal charge transfer (LMCT) absorption band around 600 nm. Most molluscs and some
arthropods such as the horseshoe crab use the copper-containing pigment hemocyanin rather than iron-
containing hemoglobin for oxygen transport, so their blood is blue when oxygenated rather than red.[8]

Isotopes

Main article: Isotopes of copper

Copper has 29 distinct isotopes ranging in atomic mass from 52 to 80. Two of these, 63Cu and 65Cu, are
stable and occur naturally, with 63Cu comprising approximately 69% of naturally occurring copper. They
both have nuclear spin of 3/2.[9] The other 27 isotopes are radioactive and do not occur naturally. The
most stable of these is 67Cu with a half-life of 61.83 hours.[9]
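
The half-life quoted for 67Cu translates into remaining sample fractions through the usual exponential decay law N(t) = N0 · (1/2)^(t / t½); a sketch (illustrative Python; the one-week interval is an invented example):

    half_life_h = 61.83   # half-life of Cu-67 in hours (from the text above)

    def fraction_remaining(hours):
        """Fraction of an initial Cu-67 sample left after the given time."""
        return 0.5 ** (hours / half_life_h)

    print("%.1f%% left after one week" % (100 * fraction_remaining(7 * 24)))  # ~15.2%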

Compounds

Copper(I) oxide powder

See also: Category:Copper compounds

Most compounds of copper adopt oxidation states copper(I) and copper(II), which are often called
cuprous and cupric, respectively.

Copper(I)

Copper(I) is the main form of copper encountered in its ores. The cuprous halides except the fluoride are
well known: CuCl, CuBr, CuI. Sugars are sometimes detected by their ability to convert blue copper(II)
complexes to reddish copper(I) oxide (Cu2O), e.g. Benedict's reagent.

Copper(II)

Copper(II) is more commonly encountered in everyday life. Copper(II) carbonate is the green tarnish that
gives copper-clad roofs and domes on older buildings their unique appearance. Copper(II) sulfate forms a
blue crystalline pentahydrate which is perhaps the most familiar copper compound in the laboratory. It is
used as a fungicide, known as Bordeaux mixture.[10]

Adding an aqueous solution of sodium hydroxide will cause the precipitation of blue solid copper(II)
hydroxide. A simplified equation is:

Cu2+ + 2 OH− → Cu(OH)2

A fuller equation shows that the reaction involves two hydroxide ions deprotonating the
hexaaquacopper(II) complex:

[Cu(H2O)6]2+ + 2 OH− → Cu(H2O)4(OH)2 + 2 H2O

Aqueous ammonia causes the same precipitate to form. Upon adding excess ammonia, the precipitate
dissolves, forming a deep blue ammonia complex, tetraamminecopper(II):

Cu(H2O)4(OH)2 + 4 NH3 → [Cu(H2O)2(NH3)4]2+ + 2 H2O + 2 OH−

This compound was once important in the processing of cellulose.

Other well-known copper(II) compounds include copper(II) acetate, copper(II) carbonate, copper(II)
chloride, copper(II) nitrate, and copper(II) oxide. Many tests for copper ions exist, one involving
potassium ferrocyanide, which gives a brown precipitate with copper salts.

Copper(III) and copper(IV)

A representative copper(III) complex is [CuF6]3-. Copper(III) compounds are uncommon but are involved
in a variety of reactions in bioinorganic chemistry and homogeneous catalysis. The cuprate
superconductors contain copper(III), e.g. YBa2Cu3O7-δ. Compounds of copper(IV) are extremely rare;
examples are the salts of [CuF6]2-.

History

Copper Age

Main article: Copper Age

Copper, as native copper, is one of the few metals to occur naturally as an un-compounded mineral.
Copper was known to some of the oldest civilizations on record, and has a history of use that is at least
10,000 years old. Some estimates of copper's discovery place this event around 9000 BC in the Middle
East.[11] A copper pendant was found in what is now northern Iraq that dates to 8700 BC.[12] It is probable
that gold and meteoritic iron were the only metals used by humans before copper.[13] By 5000 BC, there
are signs of copper smelting: the refining of copper from simple copper compounds such as malachite or
azurite. Among archaeological sites in Anatolia, Çatal Höyük (~6000 BC) features native copper artifacts
and smelted lead beads, but no smelted copper. Can Hasan (~5000 BC) had access to smelted copper but
the oldest smelted copper artifact found (a copper chisel from the chalcolithic site of Prokuplje in Serbia)
pre-dates Can Hasan by 500 years. The smelting facilities in the Balkans appear to be more advanced
than the Anatolian forges found at a later date, so it is quite probable that copper smelting originated in
the Balkans. Investment casting was realized in 4500–4000 BC in Southeast Asia.[11] Carbon dates have
established mining at around 2280 to 1890 BC at Alderley Edge in Cheshire, UK.[14]

Corroded copper ingot from Zakros, Crete, shaped in the form of an animal skin typical of that era.

Copper smelting appears to have been developed independently in several parts of the world. In addition
to its development in the Balkans by 5500 BC, it was developed in China before 2800 BC, in the Andes
around 2000 BC, in Central America around 600 AD, and in West Africa around 900 AD.[15] Copper is
found extensively in the Indus Valley Civilization by the 3rd millennium BC. In Europe, Ötzi the Iceman,
a well-preserved male dated to 3300–3200 BC, was found with an axe with a copper head 99.7% pure.
High levels of arsenic in his hair suggest he was involved in copper smelting. Over the course of
centuries, experience with copper has assisted the development of other metals; for example, knowledge
of copper smelting led to the discovery of iron smelting.

In the Americas, production in the Old Copper Complex, located in present-day Michigan and Wisconsin,
has been dated to between 6000 and 3000 BC.[16][17]

Some reports[which?] claim that ancient American civilizations, such as the Mound Builders knew of a
method of tempering copper which has not yet been rediscovered. According to historian Gerard Fowke,
there is no evidence of any such "lost art", and the best technique demonstrated for strengthening copper
in this era was hammering.[18]

Bronze Age

Main article: Bronze Age

Alloying of copper with zinc or tin to make brass or bronze was practiced soon after the discovery of
copper itself. Copper and bronze artifacts from Sumerian cities date to 3000 BC,[19] and Egyptian artifacts
of copper and copper-tin alloys are nearly as old. The use of bronze became so widespread in Europe
approximately from 2500 BC to 600 BC that it has been named the Bronze Age. The transitional period in
certain regions between the preceding Neolithic period and the Bronze Age is termed the Chalcolithic
("copper-stone"), with some high-purity copper tools being used alongside stone tools. Brass (copper-zinc
alloy) was known to the Greeks, but only became a significant supplement to bronze during the Roman
empire.[19]

Antiquity and Middle Ages

In alchemy the symbol for copper, perhaps a stylized mirror, was also the symbol for the goddess and
planet Venus.

Chalcolithic copper mine in Timna Valley, Negev Desert, Israel.

In Greek, the metal was known by the name chalkos (χαλκός). Copper was a very important resource for
the Romans, Greeks and other ancient peoples. In Roman times, it became known as aes Cyprium (aes
being the generic Latin term for copper alloys such as bronze and other metals, and Cyprium because so
much of it was mined in Cyprus). From this, the phrase was simplified to cuprum, hence the English
copper. Copper was associated with the goddess Aphrodite/Venus in mythology and alchemy, owing to
its lustrous beauty, its ancient use in producing mirrors, and its association with Cyprus, which was sacred
to the goddess. In astrology and alchemy the seven heavenly bodies known to the ancients were
associated with seven metals also known in antiquity, and Venus was assigned to copper.[20]

Britain's first use of brass occurred around the 3rd–2nd century BC. In North America, copper mining
began with marginal workings by Native Americans. Native copper is known to have been extracted from
sites on Isle Royale with primitive stone tools between 800 and 1600.[21]

Copper metallurgy was flourishing in South America, particularly in Peru around the beginning of the
first millennium AD. Copper technology proceeded at a much slower rate on other continents. Africa's
major location for copper reserves is Zambia. Copper burial ornaments dated from the 15th century
have been uncovered, but the metal's commercial production did not start until the early 20th century.
Australian copper artifacts exist, but they appear only after the arrival of Europeans; the aboriginal
culture apparently did not develop its own metallurgical abilities.

Crucial in the metallurgical and technological worlds, copper has also played an important cultural role,
particularly in currency. Romans in the 6th through 3rd centuries BC used copper lumps as money. At
first, just the copper itself was valued, but gradually the shape and look of the copper became more
important. Julius Caesar had his own coins, made from a copper-zinc alloy, while Octavianus Augustus
Caesar's coins were made from Cu-Pb-Sn alloys. With an estimated annual output of around 15,000 t,
Roman copper mining and smelting activities reached a scale unsurpassed until the time of the Industrial
Revolution; the provinces most intensely mined were those of Hispania, Cyprus and in Central
Europe.[22][23]

The gates of the Temple of Jerusalem used Corinthian bronze made by depletion gilding. Corinthian
bronze was most prevalent in Alexandria, where alchemy is thought to have begun.[24] In ancient India
(before 1000 BC), copper was used in the holistic medical science Ayurveda for surgical instruments and
other medical equipment. Ancient Egyptians (~2400 BC) used copper for sterilizing wounds and drinking
water, and as time passed, (~1500 BC) for headaches, burns, and itching. Hippocrates (~400 BC) used
copper to treat leg ulcers associated with varicose veins. Ancient Aztecs fought sore throats by gargling
with copper mixtures.

Copper is also part of many rich stories and legends, such as that of Iraq's Baghdad Battery. Copper
cylinders soldered to lead, dating from 248 BC to 226 AD, resemble a galvanic cell, leading
people to believe this may have been the first battery. This claim has so far not been substantiated.

The Bible also refers to the importance of copper: "Men know how to mine silver and refine gold, to dig
iron from the earth and melt copper from stone" (Job 28:1–2).

Modern period

A copper-saturated stream running from the disused Parys Mountain mines

The Great Copper Mountain was a mine in Falun, Sweden, that operated for roughly a millennium, from
the 10th century to 1992. It produced as much as two-thirds of Europe's copper in the 17th century and
helped fund many of Sweden's wars during that time.[25] It was referred to as the nation's treasury; Sweden
had a copper-backed currency.

Throughout history, copper's use in art has extended far beyond currency. It was used by Renaissance
sculptors, in the pre-photographic technology known as the daguerreotype, and in the Statue of Liberty.
Copper plating and copper sheathing of ships' hulls were widespread; the ships of Christopher Columbus
were among the earliest to have this protection.[26] The Norddeutsche Affinerie in Hamburg was the first
modern electroplating plant, starting production in 1876.[27] The German scientist Gottfried Osann
invented powder metallurgy of copper in 1830 while determining the metal's atomic mass. Around then it
was also discovered that the amount and type of alloying element (e.g. tin) affects the tone of bells,
informing bell casting. Flash smelting was developed by Outokumpu in Finland and first applied at the
Harjavalta plant in 1949; the energy-efficient process accounts for 50% of the world's primary copper
production.[28]

Copper has been pivotal in the economic and sociological worlds, notably in disputes involving copper
mines. The 1906 Cananea Strike in Mexico dealt with issues of work organization. The El Teniente copper
mine (1904–1951) raised political issues about capitalism and class structure. Japan's largest copper mine,
the Ashio mine, was the site of a riot in 1907. The Arizona miners' strike of 1938 dealt with American
labor issues, including the "right to strike".

Production

Chuquicamata is one of the world's largest open pit copper mines.

Copper output in 2005

Most copper ore is mined or extracted as copper sulfides from large open pit mines in porphyry copper
deposits that contain 0.4 to 1.0% copper. Examples include: Chuquicamata in Chile, Bingham Canyon
Mine in Utah and El Chino Mine in New Mexico, US. The average abundance of copper found within
crustal rocks is approximately 68 ppm by mass, and 22 ppm by atoms. In 2005, Chile was the top mine
producer of copper, with at least a one-third world share, followed by the USA, Indonesia and Peru,
reports the British Geological Survey.[5]

Reserves

World production trend

Copper prices 2003–2008 in USD per tonne

Copper has been in use for at least 10,000 years, but more than 95% of all copper ever mined and smelted
has been extracted since 1900. As with many natural resources, the total amount of copper on Earth is
vast (around 10^14 tons in the top kilometre of Earth's crust alone, or about 5 million years' worth at the
current rate of extraction). However, only a tiny fraction of these reserves is economically viable given
present-day prices and technologies. Various estimates of existing copper reserves available for mining
range from 25 to 60 years, depending on core assumptions such as the growth rate.[29]
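
As a rough consistency check on those figures (a back-of-the-envelope sketch; the extraction rate of
roughly 2×10^7 t per year is an assumed round number close to mid-2000s mine output):

    10^14 t (top kilometre of crust) ÷ 2×10^7 t/yr ≈ 5×10^6 years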

Recycling is a major source of copper in the modern world.[30] Because of these and other factors, the
future of copper production and supply is the subject of much debate, including the concept of peak
copper, analogous to peak oil.

The copper price, one measure of the availability of supply versus worldwide demand, quintupled from
its 60-year low of US$0.60 per pound (US$1.32/kg) in June 1999 to US$3.75 per pound (US$8.27/kg) in
May 2006, dropped to US$2.40 per pound (US$5.29/kg) in February 2007, then rebounded to US$3.50
per pound (US$7.71/kg = £3.89 = €5.00) in April 2007.[31] By early February 2009, however, weakening
global demand and a steep fall in commodity prices since the previous year's highs had left copper prices
at US$1.51 per pound.[32]
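
The per-pound and per-kilogram figures quoted above are the same prices in different units. A minimal
sketch of the conversion (assuming only that 1 lb = 0.45359237 kg):

    # Convert a copper price from US$/lb to US$/kg
    LB_PER_KG = 1.0 / 0.45359237  # about 2.2046 lb per kg

    def usd_per_kg(usd_per_lb: float) -> float:
        return usd_per_lb * LB_PER_KG

    print(f"{usd_per_kg(0.60):.2f}")  # 1.32, the June 1999 low quoted above
    print(f"{usd_per_kg(3.75):.2f}")  # 8.27, the May 2006 peak quoted above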

The Intergovernmental Council of Copper Exporting Countries (CIPEC), defunct since 1992, once tried
to play a similar role for copper as OPEC does for oil, but never achieved the same influence, not least
because the second-largest producer, the United States, was never a member. Formed in 1967, its
principal members were Chile, Peru, Zaire, and Zambia.

Methods

Main article: Copper extraction techniques

Recycling

Copper is 100% recyclable without any loss of quality whether in a raw state or contained in a
manufactured product. Copper is the third most recycled metal after iron and aluminium. It is estimated
that 80% of the copper ever mined is still in use today.[33]

High purity copper scrap is directly melted in a furnace and the molten copper is deoxidized and cast into
billets, or ingots. Lower purity scrap is usually refined to attain the desired purity level by an
electroplating process in which the copper scrap is dissolved into a bath of sulfuric acid and then
electroplated out of the solution.[34]

Applications

About 98% of all copper is used as the metal, taking advantage of its distinctive physical properties: it is
malleable and ductile, a good conductor of both heat and electricity, and resistant to corrosion.

The purity of copper is expressed as 4N for 99.99% pure or 7N for 99.99999% pure; the numeral gives
the number of nines after the decimal point when the purity is written as a decimal fraction (e.g. 4N
means 0.9999, or 99.99%). Copper is often too soft for its applications, so it is incorporated into numerous
alloys. For example, brass is a copper-zinc alloy, and bronze is a copper-tin alloy.[35]
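
A minimal sketch of this "nN" notation (the helper function is hypothetical, written only to illustrate the
rule just stated):

    # "nN" notation: the digit counts the nines, so 4N = 0.9999 = 99.99%
    def purity_fraction(grade: str) -> float:
        nines = int(grade.rstrip("Nn"))
        return 1.0 - 10.0 ** (-nines)

    print(purity_fraction("4N"))  # 0.9999
    print(purity_fraction("7N"))  # 0.9999999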

Copper can be machined, although it is usually necessary to use an alloy for intricate parts, such as
threaded components, to get good machinability characteristics. Good thermal conduction makes it useful
for heatsinks and in heat exchangers.

Assorted copper fittings

It is widely used in piping for water supplies, refrigeration and air conditioning.

Electronics and related devices

Copper's electrical properties are exploited in copper wire, electromagnets, electrical relays, busbars
and switches. Integrated circuits and printed circuit boards increasingly feature copper in place of
aluminium because of its superior electrical conductivity; computer heat sinks use copper because it
dissipates heat better than aluminium. Vacuum tubes, cathode ray tubes, and the magnetrons in
microwave ovens use copper, as do waveguides for microwave radiation.

Copper roof on the Minneapolis City Hall, coated with patina

Architecture and industry

 While electrical applications use oxygen-free copper, unalloyed copper used in architectural
applications is the lower-purity phosphorus-deoxidized copper (also called Cu-DHP).[36]
 Copper has been used as a waterproof roofing material since ancient times, giving many old
buildings their greenish roofs and domes. Initially copper oxide forms, which is replaced by
cuprous and cupric sulfide and finally by basic copper carbonate and sulfate; the resulting green
patina (termed verdigris) is highly resistant to corrosion.[37]
 Statuary: the Statue of Liberty, for example, contains 179,220 pounds (81.29 tonnes) of copper.
 Alloyed with nickel, e.g. in cupronickel and Monel, copper is used as a corrosion-resistant
material in shipbuilding.
 Watt's steam engine fireboxes used copper for its superior heat dissipation.
 Copper compounds in liquid form are used as a wood preservative, particularly in treating the
original portions of structures during restoration of dry-rot damage.
 Copper wires may be placed over non-conductive roofing materials to discourage the growth of
moss. (Zinc may also be used for this purpose.)

Old copper utensils in a Jerusalem restaurant

 Copper is used to prevent a building from being directly struck by lightning. High above the roof,
copper spikes (lightning rods) are connected to a very thick copper cable which leads to a large
metal plate underneath the ground; the electric current is dispersed harmlessly throughout the
ground instead of destroying the main structure.[38]

 Lead-free solder, alloyed with tin.

Copper has good corrosion resistance, but not as good as gold. It has excellent brazing and soldering
properties and can also be welded, although best results are obtained with gas metal arc welding.[39]

Applications of copper compounds

About 2% of the copper production is diverted for the production of compounds. The main applications
are for nutritional supplements and fungicides in agriculture.[10]

Biomedical applications

 As a biostatic surface in hospitals, and to line parts of ships to protect against barnacles and
mussels (originally used pure, but later superseded by Muntz metal). Bacteria will not grow on a
copper surface because it is biostatic. Copper doorknobs are used in hospitals to reduce the
transfer of disease, and Legionnaires' disease is suppressed by copper tubing in air-conditioning
systems.
 Copper(II) sulfate is used as a fungicide and for algae control in domestic lakes and ponds. It is
used in gardening powders and sprays to kill mildew.[10]
 Copper-62-PTSM, a complex containing radioactive copper-62, is used as a positron emission
tomography radiotracer for heart blood flow measurements.

 Copper-64 can be used as a positron emission tomography radiotracer for medical imaging. When
complexed with a chelate it can be used to treat cancer through radiation therapy.

Aquaculture applications

Main article: Copper alloys in aquaculture

Copper alloys have become important netting materials in the aquaculture industry because they are
antimicrobial: in the marine environment, their antimicrobial/algaecidal properties prevent biofouling. In
addition to this antifouling benefit, copper alloys are strong and corrosion-resistant in marine
environments. The combination of these properties – antifouling, high strength, and corrosion resistance –
has made copper alloys a desirable material for netting and structural components in commercial
large-scale fish farming operations.

Miscellaneous uses

 As a component in ceramic glazes, and to color glass.


 Musical instruments, especially brass instruments and timpani.
 Class D fire extinguisher, used in powder form to extinguish lithium fires by covering the burning
metal and acting as a heat sink.
 Textile fibers to create antimicrobial protective fabrics.[40]
 Weaponry
o Small arms ammunition commonly uses copper as a jacketing material around the bullet
core.
o Copper is also commonly used as a case material, in the form of brass.
o Copper is used as a liner in shaped charge armor-piercing warheads and demolition
explosives (blade).
 Copper is frequently used in electroplating, usually as a base for other metals such as nickel.
 Copper can also be used for jewelry, most frequently in bracelets. Folklore states that copper
bracelets relieve arthritis symptoms, though this is not proven.

Alloys

See also: List of copper alloys

Numerous copper alloys exist, many with important historical and contemporary uses. Speculum metal
and bronze are alloys of copper and tin; brass is an alloy of copper and zinc; cupronickel and Monel
metal are alloys of copper and nickel. While "bronze" usually refers to copper-tin alloys, it is also a
generic term for any alloy of copper, such as aluminium bronze, silicon bronze, and manganese bronze.
Copper is one of the most important constituents of carat silver and gold alloys and of carat solders used
in the jewelry industry, modifying the color, hardness and melting point of the resulting alloys.[41]

Oxygen-free pure copper can be alloyed with phosphorus to better withstand oxidizing conditions. This
alloy has an application as thick corrosion-resistant overpack for spent nuclear fuel disposal in deep
crystalline rocks.[42][Full citation needed]

Copper alloys are metal alloys with copper as their principal component. They have high resistance
against corrosion. The best known traditional types are bronze, where tin is a significant addition, and
brass, using zinc instead. Both of these are rather imprecise terms, for which "copper alloy" today tends to
be substituted, especially by museums.

[edit] Compositions

The similarity in external appearance of the various alloys, along with the different combinations of
elements used when making each alloy, can lead to confusion when categorizing the different
compositions. There are as many as 400 different copper and copper-alloy compositions loosely grouped
into the categories: copper, high copper alloy, brasses, bronzes, copper nickels, copper–nickel–zinc
(nickel silver), leaded copper, and special alloys. The following table lists the principal alloying element
for four of the more common types used in modern industry, along with the name for each type. Historical
types, such as those which characterize the Bronze Age, are vaguer as the mixtures were generally
variable.

Classification of copper and its alloys

Family | Principal alloying element | UNS numbers
Copper alloys, brass | Zinc (Zn) | C1xxxx–C4xxxx, C66400–C69800
Phosphor bronze | Tin (Sn) | C5xxxx
Aluminium bronzes | Aluminium (Al) | C60600–C64200
Silicon bronzes | Silicon (Si) | C64700–C66100
Copper nickel, nickel silvers | Nickel (Ni) | C7xxxx

Mechanical properties of common copper alloys[1]

Name | Nominal composition (%) | Form and condition | Yield strength (0.2% offset, ksi) | Tensile strength (ksi) | Elongation in 2 in. (%) | Hardness (Brinell) | Comments
Copper (ASTM B1, B2, B3, B152, B124, R133) | Cu 99.9 | Annealed | 10 | 32 | 45 | 42 | Electrical equipment, roofing, screens
" | " | Cold-drawn | 40 | 45 | 15 | 90 | "
" | " | Cold-rolled | 40 | 46 | 5 | 100 | "
Gilding metal (ASTM B36) | Cu 95.0, Zn 5.0 | Cold-rolled | 50 | 56 | 5 | 114 | Coins, bullet jackets
Cartridge brass (ASTM B14, B19, B36, B134, B135) | Cu 70.0, Zn 30.0 | Cold-rolled | 63 | 76 | 8 | 155 | Good for cold-working; radiators, hardware, electrical, drawn cartridge cases
Phosphor bronze (ASTM B103, B139, B159) | Cu 70.0, Sn 10.0, P 0.25 | Spring temper | — | 122 | 4 | 241 | High fatigue-strength and spring qualities
Yellow or high brass (ASTM B36, B134, B135) | Cu 65.0, Zn 35.0 | Annealed | 18 | 48 | 60 | 55 | Good corrosion resistance
" | " | Cold-drawn | 55 | 70 | 15 | 115 | "
" | " | Cold-rolled (HT) | 60 | 74 | 10 | 180 | "
Manganese bronze (ASTM 138) | Cu 58.5, Zn 39.2, Fe 1.0, Sn 1.0, Mn 0.3 | Annealed | 30 | 60 | 30 | 95 | Forgings
" | " | Cold-drawn | 50 | 80 | 20 | 180 | "
Naval brass (ASTM B21) | Cu 60.0, Zn 39.25, Sn 0.75 | Annealed | 22 | 56 | 40 | 90 | Resistance to salt corrosion
" | " | Cold-drawn | 40 | 65 | 35 | 150 | "
Muntz metal (ASTM B111) | Cu 60.0, Zn 40.0 | Annealed | 20 | 54 | 45 | 80 | Condenser tubes
Aluminium bronze (ASTM B169 alloy A, B124, B150) | Cu 92.0, Al 8.0 | Annealed | 25 | 70 | 60 | 80 | —
" | " | Hard | 65 | 105 | 7 | 210 | "
Beryllium copper (ASTM B194, B196, B197) | Cu 97.75, Be 2.0, Co or Ni 0.25 | Annealed, solution-treated | 32 | 70 | 45 | B60 (Rockwell) | Electrical, valves, pumps
" | " | Cold-rolled | 104 | 110 | 5 | B81 (Rockwell) | "
Free-cutting brass | Cu 62.0, Zn 35.5, Pb 2.5 | Cold-drawn | 44 | 70 | 18 | B80 (Rockwell) | Screws, nuts, gears, keys
Nickel silver (ASTM B112) | Cu 65.0, Zn 17.0, Ni 18.0 | Annealed | 25 | 58 | 40 | 70 | Hardware
" | " | Cold-rolled | 70 | 85 | 4 | 170 | "
Nickel silver (ASTM B149) | Cu 76.5, Ni 12.5, Pb 9.0, Sn 2.0 | Cast | 18 | 35 | 15 | 55 | Easy to machine; ornaments, plumbing
Cupronickel (ASTM B111, B171) | Cu 88.35, Ni 10.0, Fe 1.25, Mn 0.4 | Annealed | 22 | 44 | 45 | – | Condenser, salt-water pipes
" | " | Cold-drawn tube | 57 | 60 | 15 | – | "
Cupronickel | Cu 70.0, Ni 30.0 | Wrought | – | – | – | – | Heat-exchange equipment, valves
Ounce metal,[2] Copper Alloy C83600 (also known as "red brass" or "composition metal") (ASTM B62) | Cu 85.0, Zn 5.0, Pb 5.0, Sn 5.0 | Cast | 17 | 37 | 25 | 60 | —
Gunmetal (known as "red brass" in the US) | Varies: Cu 80–90%, Zn <5%, Sn ~10%, other elements <1% | – | – | – | – | – | –

Mechanical properties of Copper Development Association (CDA) copper alloys[3]

Family | CDA | Tensile strength, min. (ksi) | Tensile strength, typ. (ksi) | Yield strength, min. (ksi) | Yield strength, typ. (ksi) | Elongation, typ. (%) | Hardness (Brinell, 10 mm, 500 kg) | Machinability (YB = 100)
Red brass | 833 | – | 32 | – | 10 | 35 | 35 | 35
Red brass | 836 | 30 | 37 | 14 | 17 | 30 | 50–65 | 84
Red brass | 838 | 29 | 35 | 12 | 16 | 25 | 50–60 | 90
Semi-red brass | 844 | 29 | 34 | 13 | 15 | 26 | 50–60 | 90
Semi-red brass | 848 | 25 | 36 | 12 | 14 | 30 | 50–60 | 90
Manganese bronze | 862 | 90 | 95 | 45 | 48 | 20 | 170–195† | 30
Manganese bronze | 863 | 110 | 119 | 60 | 83 | 18 | 225† | 8
Manganese bronze | 865 | 65 | 71 | 25 | 28 | 30 | 130† | 26
Tin bronze | 903 | 40 | 45 | 18 | 21 | 30 | 60–75 | 30
Tin bronze | 905 | 40 | 45 | 18 | 22 | 25 | 75 | 30
Tin bronze | 907 | 35 | 44 | 18 | 22 | 20 | 80 | 20
Leaded tin bronze | 922 | 34 | 40 | 16 | 20 | 30 | 60–72 | 42
Leaded tin bronze | 923 | 36 | 40 | 16 | 20 | 25 | 60–75 | 42
Leaded tin bronze | 926 | 40 | 44 | 18 | 20 | 30 | 65–80 | 40
Leaded tin bronze | 927 | 35 | 42 | 21 | – | 20 | 77 | 45
High-leaded tin bronze | 932 | 30 | 35 | 14 | 18 | 20 | 60–70 | 70
High-leaded tin bronze | 934 | 25 | 32 | – | 16 | 20 | 55–65 | 70
High-leaded tin bronze | 935 | 25 | 32 | 12 | 16 | 30 | 55–65 | 70
High-leaded tin bronze | 937 | 25 | 35 | 12 | 18 | 20 | 55–70 | 80
High-leaded tin bronze | 938 | 25 | 30 | 14 | 16 | 18 | 50–60 | 80
High-leaded tin bronze | 943 | 21 | 27 | – | 13 | 10 | 42–55 | 80
Aluminium bronze | 952 | 65 | 80 | 25 | 27 | 35 | 110–140† | 50
Aluminium bronze | 953 | 65 | 75 | 25 | 27 | 25 | 140† | 55
Aluminium bronze | 954 | 75 | 85 | 30 | 35 | 18 | 140–170† | 60
Aluminium bronze | 955 | 90 | 100 | 40 | 44 | 12 | 180–200† | 50
Aluminium bronze | 958 | 85 | 95 | 35 | 38 | 25 | 150–170† | 50
Silicon bronze | 878 | 80 | 83 | 30 | 37 | 29 | 115 | 40

† Brinell scale with 3000 kg load

Comparison of copper alloy standards[3]

Family | CDA | ASTM | SAE | SAE (superseded) | Federal | Military
Red brass | 833 | – | – | – | – | –
Red brass | 836 | B145-836 | 836 | 40 | QQ-C-390 (B5) | C-2229 Gr2
Red brass | 838 | B145-838 | 838 | – | QQ-C-390 (B4) | –
Semi-red brass | 844 | B145-844 | – | – | QQ-C-390 (B2) | –
Semi-red brass | 848 | B145-848 | – | – | QQ-C-390 (B1) | –
Manganese bronze | 862 | B147-862 | 862 | 430A | QQ-C-390 (C4) | C-2229 Gr9
Manganese bronze | 863 | B147-863 | 863 | 430B | QQ-C-390 (C7) | C-2229 Gr8
Manganese bronze | 865 | B147-865 | 865 | 43 | QQ-C-390 (C3) | C-2229 Gr7
Tin bronze | 903 | B143-903 | 903 | 620 | QQ-C-390 (D5) | C-2229 Gr1
Tin bronze | 905 | B143-905 | 905 | 62 | QQ-C-390 (D6) | –
Tin bronze | 907 | – | 907 | 65 | – | –
Leaded tin bronze | 922 | B143-922 | 922 | 622 | QQ-C-390 (D4) | B-16541
Leaded tin bronze | 923 | B143-923 | 923 | 621 | QQ-C-390 (D3) | C-15345 Gr10
Leaded tin bronze | 926 | – | 926 | – | – | –
Leaded tin bronze | 927 | – | 927 | 63 | – | –
High-leaded tin bronze | 932 | B144-932 | 932 | 660 | QQ-C-390 (E7) | C-15345 Gr12
High-leaded tin bronze | 934 | – | – | – | QQ-C-390 (E8) | C-22229 Gr3
High-leaded tin bronze | 935 | B144-935 | 935 | 66 | QQ-C-390 (E9) | –
High-leaded tin bronze | 937 | B144-937 | 937 | 64 | QQ-C-390 (E10) | –
High-leaded tin bronze | 938 | B144-938 | 938 | 67 | QQ-C-390 (E6) | –
High-leaded tin bronze | 943 | B144-943 | 943 | – | QQ-C-390 (E1) | –
Aluminium bronze | 952 | B148-952 | 952 | 68A | QQ-C-390 (G6) | C-22229 Gr5
Aluminium bronze | 953 | B148-953 | 953 | 68B | QQ-C-390 (G7) | –
Aluminium bronze | 954 | B148-954 | 954 | – | QQ-C-390 (G5) | C-15345 Gr13
Aluminium bronze | 955 | B148-955 | 955 | – | QQ-C-390 (G3) | C-22229 Gr8
Aluminium bronze | 958 | – | – | – | QQ-C-390 (G8) | –
Silicon bronze | 878 | B30 | 878 | – | – | –

The following table outlines the chemical composition of various grades of copper alloys.

Chemical composition of copper alloys[3][4]

Family | CDA | AMS | UNS | Nominal composition [%]
Red brass | 833 | – | C83300 | Cu 93, Sn 1.5, Pb 1.5, Zn 4
Red brass | – | – | C83400[5] | Cu 90, Zn 10
Red brass | 836 | 4855B | C83600 | Cu 85, Sn 5, Pb 5, Zn 5
Red brass | 838 | – | C83800 | Cu 83, Sn 4, Pb 6, Zn 7
Semi-red brass | 844 | – | C84400 | Cu 81, Sn 3, Pb 7, Zn 9
Semi-red brass | 845 | – | C84500 | Cu 78, Sn 3, Pb 7, Zn 12
Semi-red brass | 848 | – | C84800 | Cu 76, Sn 3, Pb 6, Zn 15
Manganese bronze | – | – | C86100[6] | Cu 67, Sn 0.5, Zn 21, Fe 3, Al 5, Mn 4
Manganese bronze | 862† | – | C86200 | Cu 64, Zn 26, Fe 3, Al 4, Mn 3
Manganese bronze | 863† | 4862B | C86300 | Cu 63, Zn 25, Fe 3, Al 6, Mn 3
Manganese bronze | 865 | 4860A | C86500 | Cu 58, Sn 0.5, Zn 39.5, Fe 1, Al 1, Mn 0.25
Tin bronze | 903 | – | C90300 | Cu 88, Sn 8, Zn 4
Tin bronze | 905 | 4845D | C90500 | Cu 88, Sn 10, Zn 2, Pb 0.3 max
Tin bronze | 907 | – | C90700 | Cu 89, Sn 11, Pb 0.5 max, Ni 0.5 max
Leaded tin bronze | 922 | – | C92200 | Cu 88, Sn 6, Pb 1.5, Zn 4.5
Leaded tin bronze | 923 | – | C92300 | Cu 87, Sn 8, Pb 1 max, Zn 4
Leaded tin bronze | 926 | 4846A | C92600 | Cu 87, Sn 10, Pb 1, Zn 2
Leaded tin bronze | 927 | – | C92700 | Cu 88, Sn 10, Pb 2, Ni 0.7 max
High-leaded tin bronze | 932 | – | C93200 | Cu 83, Sn 7, Pb 7, Zn 3
High-leaded tin bronze | 934 | – | C93400 | Cu 84, Sn 8, Pb 8, Ni 0.7 max
High-leaded tin bronze | 935 | – | C93500 | Cu 85, Sn 5, Pb 9, Zn 1, Ni 0.5 max
High-leaded tin bronze | 937 | 4842A | C93700 | Cu 80, Sn 10, Pb 10, Ni 0.7 max
High-leaded tin bronze | 938 | – | C93800 | Cu 78, Sn 7, Pb 15, Ni 0.75 max
High-leaded tin bronze | 943 | 4840A | C94300 | Cu 70, Sn 5, Pb 25, Ni 0.7 max
Aluminium bronze | 952 | – | C95200 | Cu 88, Fe 3, Al 9
Aluminium bronze | 953 | – | C95300 | Cu 89, Fe 1, Al 10
Aluminium bronze | 954 | 4870B, 4872B | C95400 | Cu 85, Fe 4, Al 11
Aluminium bronze | – | – | C95410[7] | Cu 85, Fe 4, Al 11, Ni 2
Aluminium bronze | 955 | – | C95500 | Cu 81, Fe 4, Al 11, Ni 4
Aluminium bronze | – | – | C95600[8] | Cu 91, Al 7, Si 2
Aluminium bronze | – | – | C95700[9] | Cu 75, Fe 3, Al 8, Ni 2, Mn 12
Aluminium bronze | 958 | – | C95800 | Cu 81, Fe 4, Al 9, Ni 5, Mn 1
Silicon bronze | – | – | C87200[10] | Cu 89, Si 4
Silicon bronze | – | – | C87400[11] | Cu 83, Zn 14, Si 3
Silicon bronze | – | – | C87500[12] | Cu 82, Zn 14, Si 4
Silicon bronze | – | – | C87600[13] | Cu 90, Zn 5.5, Si 4.5
Silicon bronze | 878 | – | C87800[14] | Cu 80, Zn 14, Si 4
Silicon bronze | – | – | C87900[15] | Cu 65, Zn 34, Si 1


† Chemical composition may vary to yield mechanical properties

[edit] Brasses

Main article: Brass

A brass is an alloy of copper with zinc. Brasses are usually yellow in color. The zinc content can vary
between a few percent and about 40%; as long as it is kept under 15%, it does not markedly decrease the
corrosion resistance of copper.

Brasses can be sensitive to selective leaching corrosion under certain conditions, when zinc is leached
from the alloy (dezincification), leaving behind a spongy copper structure.

[edit] Bronzes

Main article: Bronze

A bronze is an alloy of copper and other metals, most often tin, but also aluminium and silicon.

 Aluminium bronzes are alloys of copper and aluminium. The aluminium content ranges mostly
between 5% and 11%. Iron, nickel, manganese and silicon are sometimes added. They have higher
strength and corrosion resistance than other bronzes, especially in marine environments, and have
low reactivity to sulfur compounds. Aluminium forms a thin passivation layer on the surface of
the metal.
 Bell metal
 Phosphor bronze
 Nickel bronzes, e.g. nickel silver and cupronickel
 Speculum metal
 UNS C69100

Biological role

Main article: Copper in health

Rich sources of copper include oysters, beef or lamb liver, Brazil nuts, blackstrap molasses, cocoa, and
black pepper. Good sources include lobster, nuts and sunflower seeds, green olives, avocados and wheat
bran.

Copper essentiality

Copper is an essential trace element that is vital to the health of all living things (humans, plants, animals,
and microorganisms). The human body normally contains copper at a level of about 1.4 to 2.1 mg per
kg of body mass.[43] Copper is distributed widely in the body and occurs in liver, muscle and bone.
Copper is transported in the bloodstream on a plasma protein called ceruloplasmin; when copper is first
absorbed in the gut, it is transported to the liver bound to albumin. Copper metabolism and excretion are
controlled by the delivery of copper to the liver by ceruloplasmin, from which it is excreted in bile.

Daily dietary standards for copper have been set by various health agencies around the world. Researchers
specializing in the fields of microbiology, toxicology, nutrition, and health risk assessments are working
together to define precise copper levels required for essentiality while avoiding deficient or excess copper
intakes.

Copper excess and deficiency

It is believed that zinc and copper compete for absorption in the digestive tract, so that a diet excessive in
one of these minerals may result in a deficiency of the other. The RDA for copper in normal
healthy adults is 0.9 mg/day; on the other hand, professional research on the subject recommends
3.0 mg/day.[44] Because of its role in facilitating iron uptake, copper deficiency can often produce anemia-
like symptoms. Conversely, an accumulation of copper in body tissues is believed to cause the
symptoms of Wilson's disease in humans. Copper deficiency is also associated with neutropenia, bone
abnormalities, hypopigmentation, impaired growth, increased incidence of infections, and abnormalities
in glucose and cholesterol metabolism.[45] Severe deficiency can be found by testing for low plasma or
serum copper levels, low ceruloplasmin, and low red blood cell superoxide dismutase (SOD) levels;[45]
however, these tests are not sensitive to marginal copper deficiency.[45] The cytochrome c oxidase
activity of leucocytes and platelets is another marker of deficiency, but the results have not been
confirmed by replication.[45]

Chronic copper depletion leads to abnormalities in the metabolism of fats – high triglycerides, non-alcoholic
steatohepatitis (NASH) and fatty liver disease – and to poor melanin and dopamine synthesis, causing
depression and sunburn.

Toxicity and precautions


NFPA 704 fire diamond for copper metal

Main article: copper toxicity

Toxicity can occur from eating acidic food that has been cooked with copper cookware. Cirrhosis of the
liver in children (Indian childhood cirrhosis) has been linked to boiling milk in copper cookware; the
Merck Manual states that recent studies suggest a genetic defect is associated with this cirrhosis.[46]
Since copper is actively excreted by the normal body, chronic copper toxicosis in humans without a
genetic defect in copper handling has not been demonstrated.[43] However, large amounts (gram
quantities) of copper salts taken in suicide attempts have produced acute copper toxicity in normal
humans. Equivalent amounts of copper salts (30 mg/kg) are toxic in animals.[47]

Antibacterial properties
Main articles: Antimicrobial properties of copper, Antimicrobial copper alloy touch surfaces, and
Copper alloys in aquaculture

Copper is antibacterial/germicidal, via the oligodynamic effect. For example, brass doorknobs disinfect
themselves of many bacteria within a period of eight hours.[48] Antimicrobial properties of copper are
effective against MRSA,[49] Escherichia coli[50] and other pathogens.[51][52][53] At colder temperatures,
longer times are required to kill bacteria.

Copper kills a variety of potentially harmful pathogens. On February 29, 2008, the United States EPA
registered 275 alloys containing greater than 65% nominal copper content as antimicrobial materials.[54]
Registered alloys include pure copper, an assortment of brasses and bronzes, and additional alloys. EPA-
sanctioned tests using Good Laboratory Practices were conducted to support several antimicrobial
claims against methicillin-resistant Staphylococcus aureus (MRSA), Enterobacter aerogenes,
Escherichia coli O157:H7 and Pseudomonas aeruginosa. The EPA registration allows the manufacturers
of these copper alloys to legally make public health claims as to the health effects of these materials.
Several of the aforementioned bacteria are responsible for a large portion of the nearly two million
hospital-acquired infections contracted each year in the United States.[55] Frequently touched surfaces in
hospitals and public facilities harbor bacteria and increase the risk of contracting infections. Covering
touch surfaces with copper alloys can help reduce microbial contamination associated with hospital-
acquired infections on these surfaces.

Copper extraction techniques


This article is about historical production methods. For modern methods, see Flash smelting.

The Chino open-pit copper mine in New Mexico.

Chalcopyrite

Copper extraction from its ores involves a series of processes. First the ore must usually be
concentrated. Then it must be roasted to convert sulfides to oxides, which are smelted to produce matte.
Finally, it undergoes various refining processes, the final one being electrolytic. The main ore in use
today is chalcopyrite (CuFeS2), which accounts for about 50% of copper production.

For economic and environmental reasons, many of the byproducts of extraction are reclaimed. Sulfur
dioxide gas, for example, is captured and turned into sulfuric acid — which is then used in the extraction
process.

Contents


 1 Concentration
 2 Hydrometallurgical extraction
o 2.1 Oxide ores
o 2.2 Secondary ores
 3 Froth flotation
o 3.1 Roasting
o 3.2 Smelting

o 3.3 Conversion to blister
o 3.4 Reduction
o 3.5 Electrorefining
 4 Concentrate and copper marketing
 5 See also
 6 Notes
 7 External links

[edit] Concentration

Most copper ores contain only a small percentage of copper metal bound up within valuable ore minerals,
the remainder of the ore being unwanted rock or gangue minerals, typically silicate or oxide minerals of
little or no value. The average grade of copper ores in the 21st century is below 0.6% Cu, with ore
minerals making up less than 2% of the total volume of the ore rock. A key objective in the metallurgical
treatment of any ore is the separation of ore minerals from gangue minerals within the rock.

The first stage of any process within a metallurgical treatment circuit is comminution, in which the rock
particles are reduced in size so that ore particles can be efficiently separated from gangue particles; this is
followed by physical liberation of the ore minerals from the rock. The process of liberation of copper
ores depends upon whether they are oxide or sulfide ores.

For oxide ores, a hydrometallurgical liberation process is normally undertaken, which uses the soluble
nature of the ore minerals to the advantage of the metallurgical treatment plant. For sulfide ores, both
secondary (supergene) and primary (unweathered), froth flotation is utilised to physically separate ore
from gangue. For native copper bearing ore bodies, or sections of ore bodies rich in supergene native
copper, this mineral can be recovered by a simple gravity circuit.

[edit] Hydrometallurgical extraction


[edit] Oxide ores

Oxidised copper ore bodies may be treated via several processes, with hydrometallurgical processes used
to treat oxide ores dominated by copper carbonate minerals such as azurite and malachite, and other
soluble minerals such as the silicate chrysocolla or the chloride atacamite.

Such oxide ores are usually leached with sulfuric acid, usually in a heap leach or dump leach process, to
liberate the copper minerals into a solution of sulfuric acid laden with copper sulfate. The copper sulfate
solution (the pregnant leach solution) is then stripped of copper via a solvent extraction and
electrowinning (SX-EW) plant, with the barred sulfuric acid recycled back onto the heaps. Alternatively,
the copper can be precipitated out of the pregnant solution by contacting it with scrap iron, a process
called cementation; cement copper is normally less pure than SX-EW copper. Commonly, sulfuric acid is
used as a leachant for copper oxide, although it is possible to use water, particularly for ores rich in ultra-
soluble sulfate minerals.[citation needed]

In general, froth flotation is not used to concentrate copper oxide ores, as oxide minerals are not
responsive to the froth flotation chemicals or process (i.e., they do not bind to the kerosene-based
chemicals). Copper oxide ores have occasionally been treated via froth flotation after sulfidation of the
oxide minerals with certain chemicals, which react with the oxide mineral particles to produce a thin rime
of sulfide (usually chalcocite) that can then be activated in the froth flotation plant.
Copper-bearing Minerals[1]

Name | Formula | % Copper when pure
Chalcopyrite | CuFeS2 | 34.5
Chalcocite | Cu2S | 79.8
Covellite | CuS | 66.5
Bornite | 2Cu2S·CuS·FeS | 63.3
Tetrahedrite | Cu3SbS3 + x(Fe,Zn)6Sb2S9 | 32–45
Malachite | CuCO3·Cu(OH)2 | 57.3
Azurite | 2CuCO3·Cu(OH)2 | 55.1
Cuprite | Cu2O | 88.8
Chrysocolla | CuO·SiO2·2H2O | 37.9

[edit] Secondary ores

Secondary sulfides – those formed by supergene secondary enrichment – are resistant (refractory) to
sulfuric leaching. These ores are a mixture of copper carbonate, sulfate, phosphate, and oxide minerals
and secondary sulfide minerals, dominantly chalcocite, though other minerals such as digenite can be
important in some deposits.

Supergene ores rich in sulfides may be concentrated using froth flotation. A typical concentrate of
chalcocite can grade between 37% and 40% Cu in sulfide, making them relatively cheap to smelt
compared to chalcopyrite concentrates.

Some supergene sulfide deposits can be leached using a bacterial oxidation heap leach process to oxidize
the sulfides to sulfuric acid, which also allows for simultaneous leaching with sulfuric acid to produce a
copper sulfate solution. As with oxide ores, solvent extraction and electrowinning technologies are used
to recover the copper from the pregnant leach solution.

Supergene sulfide ores rich in native copper minerals are refractory to treatment with sulfuric acid
leaching on all practicable time scales, and the dense metal particles do not react with froth flotation
media. Typically, if native copper is a minor part of a supergene profile it will not be recovered and will
report to the tailings. When rich enough, native copper ore bodies may be treated to recover the contained
copper via a gravity separation circuit, where the density of the metal is used to liberate it from the lighter
silicate minerals. Often, the nature of the gangue is important, as clay-rich native copper ores prove
difficult to liberate.

[edit] Froth flotation


Main article: Froth flotation

Froth flotation cells to concentrate copper and nickel sulfide minerals, Falconbridge, Ontario.

The modern froth flotation process was independently invented in the early 1900s in Australia by C. V.
Potter and around the same time by G. D. Delprat.[2]

Copper sulphide loaded air bubbles on a Jameson cell at the flotation plant of the Prominent Hill mine in
South Australia

At the current level of technology, all primary copper sulfide ores, and most concentrates of
secondary copper sulfides (chiefly chalcocite), require smelting to produce copper from the sulfide
minerals. Some experimental hydrometallurgical techniques to process chalcopyrite are being
investigated but as of 2009 are unproven outside of laboratories.[citation needed] Some vat leach or pressure
leach processes exist to solubilise chalcocite concentrates and produce copper cathode from the resulting
leachate solution, but this is a minor part of the market.

Carbonate concentrates are a relatively minor product produced from copper cementation plants, typically
as the end-stage of a heap-leach operation. Such carbonate concentrates can be treated by a SX-EW plant
or smelted.

The copper ore is crushed and ground to a size such that an acceptably high degree of liberation has
occurred between the copper sulfide ore minerals and the gangue minerals. The ore is then wetted,
suspended in a slurry, and mixed with xanthate reagents (or other reagents of the thiol class), which react
with the copper sulfide mineral particles to make their surfaces hydrophobic. (Besides xanthates,
dithiophosphates and thionocarbamates are commonly used.)

The treated ore is introduced to a water-filled aeration tank containing a surfactant such as methyl isobutyl
carbinol (MIBC), an alcohol. Air is constantly forced through the slurry and the air bubbles attach
to the hydrophobic copper sulfide particles, which are carried to the surface, where they form a froth
and are skimmed off. These skimmings are generally passed through a cleaner-scavenger cell to remove
excess silicates and other sulfide minerals that can deleteriously impact the concentrate quality (typically
galena), and the final concentrate is sent for smelting.

The rock which has not floated off in the flotation cell is either discarded as tailings or processed to
extract other elements or ore minerals, such as galena or sphalerite, if present.

To improve the process efficiency, lime is used to raise the pH of the water bath, causing the collector to
ionize more and to preferentially bond to chalcopyrite (CuFeS2) rather than pyrite (FeS2); iron exists in
both of these primary-zone minerals.

Copper ores containing chalcopyrite can be concentrated to produce a concentrate with between 20% and
30% copper-in-concentrate (usually 27-29% Cu); the remainder of the concentrate is iron and sulfur in the
chalcopyrite, and unwanted impurities such as silicate gangue minerals or other sulfide minerals, typically
minor amounts of pyrite, sphalerite or galena.

Chalcocite concentrates typically grade between 37% and 40% copper-in-concentrate, as chalcocite has
no iron within the mineral.
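
These grades follow directly from the copper fraction of the pure minerals, computed from approximate
atomic masses (Cu 63.5, Fe 55.8, S 32.1):

    Chalcocite, Cu2S: 2 × 63.5 / (2 × 63.5 + 32.1) ≈ 0.798, i.e. 79.8% Cu when pure
    Chalcopyrite, CuFeS2: 63.5 / (63.5 + 55.8 + 2 × 32.1) ≈ 0.346, i.e. about 34.5% Cu when pure

so a concentrate roughly half chalcocite by mass grades near 40% Cu, while even a pure chalcopyrite
concentrate cannot exceed about 34.5% Cu.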

[edit] Roasting

See also: Roasting (metallurgy)

In the roaster, the copper concentrate is partially oxidised to produce calcine and sulfur dioxide gas. The
stoichiometry of the reaction which takes place is:

2CuFeS2(s) + 3O2(g) → 2FeO(s) + 2CuS(s) + 2SO2(g)

As of 2005, roasting is no longer common in copper concentrate treatment. Direct smelting using
technologies such as flash smelting and the Noranda, ISASmelt, Mitsubishi or El Teniente furnaces is
now used instead.
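
As an illustration of the mass balance implied by the roasting reaction above (a sketch only; it assumes a
feed of pure chalcopyrite, whereas real roaster feeds are concentrates containing gangue and other
sulfides):

    # SO2 evolved per tonne of chalcopyrite in the partial roast
    # 2 CuFeS2 + 3 O2 -> 2 FeO + 2 CuS + 2 SO2  (1 mol SO2 per mol CuFeS2)
    M = {"Cu": 63.55, "Fe": 55.85, "S": 32.07, "O": 16.00}  # g/mol

    m_cufes2 = M["Cu"] + M["Fe"] + 2 * M["S"]  # ~183.5 g/mol
    m_so2 = M["S"] + 2 * M["O"]                # ~64.1 g/mol

    so2_per_tonne = m_so2 / m_cufes2           # tonnes of SO2 per tonne of CuFeS2
    print(f"{so2_per_tonne:.2f} t SO2 per t CuFeS2")  # ~0.35 t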

[edit] Smelting

The calcine is then mixed with silica and coke and smelted at 1200 °C (in an exothermic reaction) to form
a liquid called copper matte. This temperature allows the reactions to proceed rapidly and allows the matte
and slag to melt, so they can be tapped out of the furnace. In copper recycling, this is the point where
scrap copper is introduced.

Several reactions occur.

For example, iron oxides and sulfides are converted to slag, which is floated off the matte. The
reactions for this are:

FeO(s) + SiO2(s) → FeO·SiO2(l)

In a parallel reaction the iron sulfide is converted to slag:

2FeS(l) + 3O2(g) + 2SiO2(l) → 2FeO·SiO2(l) + 2SO2(g)

The slag is discarded or reprocessed to recover any remaining copper.

[edit] Conversion to blister

The matte, which is produced in the smelter, contains around 70% copper primarily as copper sulfide as
well as iron sulfide. The sulfur is removed at high temperature as sulfur dioxide by blowing air through
molten matte:

2Cu2S + 3O2 → 2Cu2O + 2SO2

Cu2S + 2Cu2O → 6Cu + SO2

In a parallel reaction the iron sulfide is converted to slag:

2FeS + 3O2 → 2FeO + 2SO2

2FeO + 2SiO2 → 2FeSiO3

The end product is about 98% pure copper, known as blister copper because of the broken surface created
by the escape of sulfur dioxide gas as the copper ingots are cast. By-products generated in the process are
sulfur dioxide and slag.

[edit] Reduction

The blister copper is put into an anode furnace (a furnace that uses the blister copper as anode material)
to remove most of the remaining oxygen. This is done by blowing natural gas through the molten copper
oxide. When the flame burns green, indicating the copper oxidation spectrum, the oxygen has mostly
been burned off. This yields copper of about 99% purity. The anodes produced from this are fed to the
electrorefinery.

[edit] Electrorefining

Apparatus for electrolytic refining of copper

The copper is refined by electrolysis. The anodes cast from processed blister copper are placed into an
aqueous solution of 3-4% copper sulfate and 10-16% sulfuric acid. Cathodes are thin rolled sheets of
highly pure copper. A potential of only 0.2-0.4 volts is required for the process to commence. At the
anode, copper and less noble metals dissolve. More noble metals such as silver and gold as well as
selenium and tellurium settle to the bottom of the cell as anode slime, which forms a saleable byproduct.

Copper(II) ions migrate through the electrolyte to the cathode. At the cathode, copper metal plates out but
less noble constituents such as arsenic and zinc remain in solution.[1] The reactions are:

At the anode: Cu(s) → Cu2+(aq) + 2e–

At the cathode: Cu2+(aq) + 2e– → Cu(s)
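
A minimal sketch of what these electrode reactions imply for plating rates, via Faraday's law (the cell
current and the assumption of 100% current efficiency are illustrative, not figures from this article):

    # Copper plated at the cathode: Cu2+ + 2e- -> Cu
    F = 96485.0   # Faraday constant, C/mol
    M_CU = 63.55  # molar mass of copper, g/mol
    Z = 2         # electrons transferred per Cu atom

    def copper_deposited_kg(current_a: float, hours: float) -> float:
        charge_c = current_a * hours * 3600.0      # total charge in coulombs
        return charge_c / (Z * F) * M_CU / 1000.0  # kg of copper

    # e.g. a hypothetical 20 kA cell running for 24 h at 100% efficiency
    print(f"{copper_deposited_kg(20_000, 24):.0f} kg")  # ~569 kg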

[edit] Concentrate and copper marketing

Copper concentrates produced by mines are sold to smelters and refiners, who treat the ore, refine the
copper, and charge for this service via treatment charges (TCs) and refining charges (RCs). TCs are
charged in US dollars per tonne of concentrate treated and RCs in cents per pound treated, denominated
in US dollars, with benchmark prices set annually by major Japanese smelters. The customer in this case
can be a smelter, who on-sells blister copper ingots to a refiner, or a smelter-refiner which is vertically
integrated.

The typical contract for a miner is denominated against the London Metal Exchange price, minus the TC-
RCs and any applicable penalties or credits. Penalties may be assessed against copper concentrates
according to the level of deleterious elements such as arsenic, bismuth, lead or tungsten. Because a large
portion of copper sulfide ore bodies contain silver or gold in appreciable amounts, a credit can be paid to
the miner for these metals if their concentration within the concentrate is above a certain amount. Usually
the refiner or smelter charges the miner a fee based on the concentration: a typical contract will specify
that a credit is due for every ounce of the metal in the concentrate above a certain concentration; below
that level, if the metal is recovered, the smelter keeps it and sells it to defray costs.

Copper concentrate is traded either via spot contracts or under long term contracts as an intermediate
product in its own right. Often the smelter sells the copper metal itself on behalf of the miner. The miner
is paid the price at the time that the smelter-refiner makes the sale, not at the price on the date of delivery
of the concentrate. Under a Quotational Pricing system, the price is agreed to be at a fixed date in the
future, typically 90 days from time of delivery to the smelter.

A-grade copper cathode is 99.999% copper, in sheets that are 1 cm thick and approximately 1 metre
square, weighing approximately 200 pounds. It is a true commodity, deliverable to and tradeable upon the
metal exchanges in New York (COMEX), London (London Metal Exchange) and Shanghai (Shanghai
Futures Exchange). Often copper cathode is traded upon the exchanges indirectly via warrants, options, or
swap contracts, such that the majority of copper is traded on the LME/COMEX/SFE but delivery is
achieved indirectly and at a remove from the physical warehouses themselves.

The chemical specification for electrolytic grade copper is ASTM B 115-00 (a standard that specifies the
purity and maximum electrical resistivity of the product).

Aluminium


This article is about the metallic element. For other uses, see Aluminium (disambiguation).

magnesium ← aluminium → silicon

Aluminium in the periodic table: element 13, symbol Al (B above, Ga below in group 13).

Appearance: silvery white. Spectral lines of aluminium.

General properties

Name, symbol, number: aluminium, Al, 13
Pronunciation: UK /ˌæljʉˈmɪniəm/ (AL-ew-MIN-ee-əm) or US /əˈluːmɪnəm/ (ə-LOO-mi-nəm)
Element category: other metal
Group, period, block: 13, 3, p
Standard atomic weight: 26.9815386 g·mol−1
Electron configuration: [Ne] 3s2 3p1
Electrons per shell: 2, 8, 3

Physical properties

Phase: solid
Density (near r.t.): 2.70 g·cm−3
Liquid density at m.p.: 2.375 g·cm−3
Melting point: 933.47 K (660.32 °C, 1220.58 °F)
Boiling point: 2792 K (2519 °C, 4566 °F)
Heat of fusion: 10.71 kJ·mol−1
Heat of vaporization: 294.0 kJ·mol−1
Specific heat capacity (25 °C): 24.200 J·mol−1·K−1
Vapor pressure: 1 Pa at 1482 K; 10 Pa at 1632 K; 100 Pa at 1817 K; 1 kPa at 2054 K; 10 kPa at 2364 K; 100 kPa at 2790 K

Atomic properties

Oxidation states: 3, 2[1], 1[2] (amphoteric oxide)
Electronegativity: 1.61 (Pauling scale)
Ionization energies: 1st: 577.5 kJ·mol−1; 2nd: 1816.7 kJ·mol−1; 3rd: 2744.8 kJ·mol−1
Atomic radius: 143 pm
Covalent radius: 121±4 pm
Van der Waals radius: 184 pm

Miscellanea

Crystal structure: face-centered cubic
Magnetic ordering: paramagnetic[3]
Electrical resistivity (20 °C): 28.2 nΩ·m
Thermal conductivity (300 K): 237 W·m−1·K−1
Thermal expansion (25 °C): 23.1 µm·m−1·K−1
Speed of sound (thin rod, r.t., rolled): 5,000 m·s−1
Young's modulus: 70 GPa
Shear modulus: 26 GPa
Bulk modulus: 76 GPa
Poisson ratio: 0.35
Mohs hardness: 2.75
Vickers hardness: 167 MPa
Brinell hardness: 245 MPa
CAS registry number: 7429-90-5

Most stable isotopes (main article: Isotopes of aluminium)

iso | NA | half-life | DM | DE (MeV) | DP
26Al | trace | 7.17×10^5 y | β+ | 1.17 | 26Mg
26Al | | | ε | – | 26Mg
26Al | | | γ | 1.8086 | –
27Al | 100% | stable with 14 neutrons | | |

Aluminium (UK /ˌæljʉˈmɪniəm/ AL-ew-MIN-ee-əm)[4] or aluminum (US /əˈluːmɪnəm/ ə-LOO-mi-nəm)
is a silvery white member of the boron group of chemical elements. It has the symbol Al and its
atomic number is 13. It is not soluble in water under normal circumstances. Aluminium is the most
abundant metal in the Earth's crust, and the third most abundant element, after oxygen and silicon. It
makes up about 8% by weight of the Earth's solid surface. Aluminium is too reactive chemically to occur
in nature as a free metal. Instead, it is found combined in over 270 different minerals.[5] The chief source
of aluminium is bauxite ore.

Aluminium is remarkable for the metal's low density and for its ability to resist corrosion due to the
phenomenon of passivation. Structural components made from aluminium and its alloys are vital to the
aerospace industry and are very important in other areas of transportation and building. Its reactive nature
makes it useful as a catalyst or additive in chemical mixtures, including ammonium nitrate explosives, to
enhance blast power.

Despite its prevalence in the environment, aluminium salts are not known to be used by any form of life.
Also in keeping with the element's abundance, it is well tolerated[citation needed] by plants in soils (in which it
is a major component), and to a lesser extent, by animals as a component of plant materials in the diet
(which often contain traces of dust and soil). Soluble aluminium salts have some demonstrated toxicity to
animals if delivered in quantity by unnatural routes, such as injection. Controversy still exists about
aluminium's possible long-term toxicity to humans from larger ingested amounts.

Contents


 1 Characteristics

 2 Creation
 3 Isotopes
 4 Natural occurrence
 5 Production and refinement
 6 Recycling
 7 Chemistry
o 7.1 Oxidation state +1
o 7.2 Oxidation state +2
o 7.3 Oxidation state +3
o 7.4 Analysis
 8 Applications
o 8.1 General use
o 8.2 Aluminium compounds
o 8.3 Aluminium alloys in structural applications
o 8.4 Household wiring
 9 History
 10 Etymology
o 10.1 Nomenclature history
o 10.2 Present-day spelling
 11 Health concerns
 12 Effect on plants
 13 See also
 14 References
 15 External links

Characteristics

Etched surface from a high purity (99.9998%) aluminium bar, size 55×37 mm

Aluminium is a soft, durable, lightweight, ductile and malleable metal with appearance ranging from
silvery to dull gray, depending on the surface roughness. Aluminium is nonmagnetic and nonsparking. It
is also insoluble in alcohol, though it can be soluble in water in certain forms. The yield strength of pure
aluminium is 7–11 MPa, while aluminium alloys have yield strengths ranging from 200 MPa to 600
MPa.[6] Aluminium has about one-third the density and stiffness of steel. It is easily machined, cast,
drawn and extruded.

Corrosion resistance can be excellent due to a thin surface layer of aluminium oxide that forms when the
metal is exposed to air, effectively preventing further oxidation. The strongest aluminium alloys are less
corrosion resistant due to galvanic reactions with alloyed copper.[6] This corrosion resistance is also often
greatly reduced when many aqueous salts are present, particularly in the presence of dissimilar metals.

Aluminium atoms are arranged in a face-centered cubic (fcc) structure. Aluminium has a stacking-fault
energy of approximately 200 mJ/m2.[7]

Aluminium is one of the few metals that retain full silvery reflectance in finely powdered form, making it
an important component of silver paints. Aluminium mirror finish has the highest reflectance of any metal
in the 200–400 nm (UV) and the 3,000–10,000 nm (far IR) regions; in the 400–700 nm visible range it is
slightly outperformed by tin and silver, and in the 700–3,000 nm (near IR) range by silver, gold, and
copper.[8]

Aluminium is a good thermal and electrical conductor, having 62% the conductivity of copper.
Aluminium is capable of being a superconductor, with a superconducting critical temperature of 1.2
kelvins and a critical magnetic field of about 100 gauss (10 milliteslas).[9]

Creation

Stable aluminium is created when hydrogen fuses with magnesium either in large stars or in
supernovae.[10]

Isotopes

Main article: Isotopes of aluminium

Aluminium has many known isotopes, whose mass numbers range from 21 to 42; however, only 27Al
(stable isotope) and 26Al (radioactive isotope, t1/2 = 7.2×10^5 y) occur naturally. 27Al has a natural
abundance above 99.9%. 26Al is produced from argon in the atmosphere by spallation caused by cosmic-
ray protons. Aluminium isotopes have found practical application in dating marine sediments, manganese
nodules, glacial ice, quartz in rock exposures, and meteorites. The ratio of 26Al to 10Be has been used to
study the roles of transport, deposition, sediment storage, burial times, and erosion on 10^5 to 10^6 year
time scales.[11] Cosmogenic 26Al was first applied in studies of the Moon and meteorites. Meteoroid
fragments, after departure from their parent bodies, are exposed to intense cosmic-ray bombardment
during their travel through space, causing substantial 26Al production. After falling to Earth, atmospheric
shielding drastically reduces 26Al production, and its decay can then be used to determine the meteorite's
terrestrial age. Meteorite research has also shown that 26Al was relatively abundant at the time of
formation of our planetary system. Most meteorite scientists believe that the energy released by the decay
of 26Al was responsible for the melting and differentiation of some asteroids after their formation 4.55
billion years ago.[12]
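
A minimal sketch of the terrestrial-age calculation implied by this paragraph (the activity values are
hypothetical; only the half-life is taken from the isotope data above):

    import math

    HALF_LIFE_26AL = 7.17e5  # years, from the isotope table above
    LAMBDA = math.log(2) / HALF_LIFE_26AL

    def terrestrial_age_years(a_measured: float, a_at_fall: float) -> float:
        # Exponential decay: A(t) = A0 * exp(-LAMBDA * t), solved for t
        return math.log(a_at_fall / a_measured) / LAMBDA

    # e.g. 26Al activity measured at 80% of the value expected at fall
    print(f"{terrestrial_age_years(0.8, 1.0):,.0f} y")  # ~231,000 y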

Natural occurrence

See also: Aluminium in Africa

In the Earth's crust, aluminium is the most abundant (8.3% by weight) metallic element and the third most
abundant of all elements (after oxygen and silicon).[13] Because of its strong affinity to oxygen, it is
almost never found in the elemental state; instead it is found in oxides or silicates. Feldspars, the most
common group of minerals in the Earth's crust, are aluminosilicates. Native aluminium metal can be

found as a minor phase in low oxygen fugacity environments, such as the interiors of certain
volcanoes.[14] It also occurs in the minerals beryl, cryolite, garnet, spinel and turquoise.[15] Impurities in
Al2O3, such as chromium or cobalt yield the gemstones ruby and sapphire, respectively.[13] Pure Al2O3,
known as corundum, is one of the hardest materials known.[15]

Although aluminium is an extremely common and widespread element, the common aluminium minerals
are not economic sources of the metal. Almost all metallic aluminium is produced from the ore bauxite
(AlOx(OH)3-2x). Bauxite occurs as a weathering product of low iron and silica bedrock in tropical climatic
conditions.[16] Large deposits of bauxite occur in Australia, Brazil, Guinea and Jamaica but the primary
mining areas for the ore are in Ghana, Indonesia, Russia and Surinam.[17] Smelting of the ore mainly
occurs in Australia, Brazil, Canada, Norway, Russia and the United States.[17] Because smelting is an
energy-intensive process, regions with excess natural gas supplies (such as the United Arab Emirates) are
becoming aluminium refiners.

Production and refinement

Although aluminium is the most abundant metallic element in the Earth's crust, it is never found in free,
metallic form, and it was once considered a precious metal more valuable than gold. Napoleon III,
Emperor of France, is reputed to have given a banquet where the most honoured guests were given
aluminium utensils, while the others made do with gold.[18][19] The Washington Monument was
completed, with the 100 ounce (2.8 kg) aluminium capstone being put in place on December 6, 1884, in
an elaborate dedication ceremony. It was the largest single piece of aluminium cast at the time, when
aluminium was as expensive as silver.[20] Aluminium has been produced in commercial quantities for just
over 100 years.

Bauxite

Aluminium is a strongly reactive metal that forms a high-energy chemical bond with oxygen. Compared
to most other metals, it is difficult to extract from ore, such as bauxite, owing to the energy required to
reduce aluminium oxide (Al2O3). Direct reduction with carbon, as is used to produce iron, is
not chemically possible, since aluminium is a stronger reducing agent than carbon. An indirect
carbothermic reduction is possible using carbon and Al2O3, which forms an intermediate Al4C3 that
can further yield aluminium metal at a temperature of 1,900–2,000 °C. This process, still under
development, requires less energy and yields less CO2 than the Hall-Héroult process, the major
industrial process for aluminium extraction.[21] Aluminium oxide has a melting point of about 2,000 °C
(3,600 °F), so aluminium must be extracted by electrolysis. In this process, the aluminium oxide is
dissolved in molten cryolite with calcium fluoride and then reduced to the pure metal. The operational
temperature of the reduction cells is around 950 to 980 °C (1,740 to 1,800 °F). Cryolite is found as a
mineral in Greenland, but in industrial use it has been replaced by a synthetic substance. Cryolite is a
chemical compound of aluminium and sodium fluorides: (Na3AlF6). The aluminium oxide (a white
powder) is obtained by refining bauxite in the Bayer process of Karl Bayer. (Previously, the Deville
process was the predominant refining technology.)

The electrolytic process replaced the Wöhler process, which involved the reduction of anhydrous
aluminium chloride with potassium. Both of the electrodes used in the electrolysis of aluminium oxide are
carbon. Once the refined alumina is dissolved in the electrolyte, its ions are free to move around. The
reaction at the cathode is:

Al3+ + 3 e− → Al

Here the aluminium ion is being reduced. The aluminium metal then sinks to the bottom and is tapped off,
usually cast into large blocks called aluminium billets for further processing.

At the anode, oxygen is formed:

2 O2− → O2 + 4 e−

This carbon anode is then oxidized by the oxygen, releasing carbon dioxide:

O2 + C → CO2

The anodes in a reduction cell must therefore be replaced regularly, since they are consumed in the
process.
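
Combining the electrode reactions gives a lower bound on this anode consumption: each mole of
aluminium requires 3 electrons, and each mole of carbon consumed as CO2 supplies 4, so at least 3/4 mol
of carbon is burned per mole of aluminium (a theoretical minimum; in practice CO formation and air
burn raise the figure):

    (3/4 × 12.0 g/mol C) / (26.98 g/mol Al) ≈ 0.33 kg of anode carbon per kg of aluminium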

Unlike the anodes, the cathodes are not oxidized because there is no oxygen present, as the carbon
cathodes are protected by the liquid aluminium inside the cells. Nevertheless, cathodes do erode, mainly
due to electrochemical processes and metal movement. After five to ten years, depending on the current
used in the electrolysis, a cell has to be rebuilt because of cathode wear.

World production trend of aluminium

Aluminium electrolysis with the Hall-Héroult process consumes a lot of energy, but alternative processes
were always found to be less viable economically and/or ecologically. The worldwide average specific
energy consumption is approximately 15±0.5 kilowatt-hours per kilogram of aluminium produced (52 to

56 MJ/kg). The most modern smelters achieve approximately 12.8 kW·h/kg (46.1 MJ/kg). (Compare this
to the heat of reaction, 31 MJ/kg, and the Gibbs free energy of reaction, 29 MJ/kg.) Reduction line
currents for older technologies are typically 100 to 200 kiloamperes; state-of-the-art smelters operate at
about 350 kA. Trials have been reported with 500 kA cells.

Electric power represents about 20% to 40% of the cost of producing aluminium, depending on the
location of the smelter. Smelters tend to be situated where electric power is both plentiful and
inexpensive, such as South Africa, Ghana, the South Island of New Zealand, Australia, the People's
Republic of China, the Middle East, Russia, Quebec and British Columbia in Canada, and Iceland.[22]
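
A minimal sketch of what these figures mean for operating cost (the electricity tariff is a hypothetical
example value, not one from this article):

    # Electricity cost per kilogram of primary aluminium
    SPECIFIC_ENERGY_KWH_PER_KG = 15.0  # worldwide average quoted above
    TARIFF_USD_PER_KWH = 0.03          # assumed industrial tariff

    cost = SPECIFIC_ENERGY_KWH_PER_KG * TARIFF_USD_PER_KWH
    print(f"US${cost:.2f} per kg of aluminium")  # US$0.45 at these assumptions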

Aluminium output in 2005

In 2005, the People's Republic of China was the top producer of aluminium with almost a one-fifth world
share, followed by Russia, Canada, and the USA, reports the British Geological Survey.

Over the last 50 years, Australia has become a major producer of bauxite ore and a major producer and
exporter of alumina.[23] Australia produced 62 million tonnes of bauxite in 2005. The Australian deposits
have some refining problems, some being high in silica, but have the advantage of being shallow and
relatively easy to mine.[24]

See also: Category:Aluminium minerals

Recycling

Aluminium recycling code

Main article: Aluminium recycling

Aluminium is 100% recyclable without any loss of its natural qualities. Recovery of the metal via
recycling has become an important facet of the aluminium industry.

Recycling involves melting the scrap, a process that requires only 5% of the energy used to produce
aluminium from ore, though a significant part (up to 15% of the input material) is lost as dross (ash-like
oxide).[25] The dross can undergo a further process to extract aluminium.

Recycling was a low-profile activity until the late 1960s, when the growing use of aluminium beverage
cans brought it to public awareness.

In Europe aluminium experiences high rates of recycling, ranging from 42% for beverage cans to 85% for
construction materials and 95% for transport vehicles.[26]

Recycled aluminium is known as secondary aluminium, but it maintains the same physical properties as
primary aluminium. Secondary aluminium is produced in a wide range of formats and is employed in
80% of alloy injections. Another important use is in extrusion.

White dross from primary aluminium production and from secondary recycling operations still contains
useful quantities of aluminium that can be extracted industrially.[27] The process produces aluminium
billets, together with a highly complex waste material. This waste is difficult to manage. It reacts with
water, releasing a mixture of gases (including, among others, hydrogen, acetylene, and ammonia), which
spontaneously ignites on contact with air;[28] contact with damp air results in the release of copious
quantities of ammonia gas. Despite these difficulties, the waste has found use as a filler in asphalt and
concrete.[29]

Chemistry

Oxidation state +1

AlH is produced when aluminium is heated in an atmosphere of hydrogen. Al2O is made by heating the
normal oxide, Al2O3, with silicon at 1,800 °C (3,272 °F) in a vacuum.[30]

Al2S can be made by heating Al2S3 with aluminium shavings at 1,300 °C (2,372 °F) in a vacuum.[30] It
quickly disproportionates to the starting materials. The selenide is made in a parallel manner.

AlF, AlCl and AlBr exist in the gaseous phase when the tri-halide is heated with aluminium. Aluminium
halides usually exist in the form AlX3, where X is F, Cl, Br, or I.[30]

Oxidation state +2

Aluminium monoxide, AlO, has been detected in the gas phase after explosion[31] and in stellar absorption
spectra.[32]

Oxidation state +3

Fajans' rules show that the simple trivalent cation Al3+ is not expected to be found in anhydrous salts or
binary compounds such as Al2O3. The hydroxide is a weak base and aluminium salts of weak acids, such
as carbonate, cannot be prepared. The salts of strong acids, such as nitrate, are stable and soluble in water,
forming hydrates with at least six molecules of water of crystallization.

Aluminium hydride, (AlH3)n, can be produced from trimethylaluminium and an excess of hydrogen. It
burns explosively in air. It can also be prepared by the action of aluminium chloride on lithium hydride in
ether solution, but cannot be isolated free from the solvent. Alumino-hydrides of the most electropositive
elements are known, the most useful being lithium aluminium hydride, Li[AlH4]. It decomposes into
lithium hydride, aluminium and hydrogen when heated, and is hydrolysed by water. It has many uses in
organic chemistry, particularly as a reducing agent. The aluminohalides have a similar structure.

Aluminium hydroxide may be prepared as a gelatinous precipitate by adding ammonia to an aqueous
solution of an aluminium salt. It is amphoteric, being both a very weak acid, and forming aluminates with
alkalis. It exists in various crystalline forms.

Aluminium carbide, Al4C3 is made by heating a mixture of the elements above 1,000 °C (1,832 °F). The
pale yellow crystals have a complex lattice structure, and react with water or dilute acids to give methane.
The acetylide, Al2(C2)3, is made by passing acetylene over heated aluminium.

Aluminium nitride, AlN, can be made from the elements at 800 °C (1,472 °F). It is hydrolysed by water to
form ammonia and aluminium hydroxide. Aluminium phosphide, AlP, is made similarly, and hydrolyses
to give phosphine.

Aluminium oxide, Al2O3, occurs naturally as corundum, and can be made by burning aluminium in
oxygen or by heating the hydroxide, nitrate or sulfate. As a gemstone, its hardness is only exceeded by
diamond, boron nitride, and carborundum. It is almost insoluble in water. Aluminium sulfide, Al2S3, may
be prepared by passing hydrogen sulfide over aluminium powder. It is polymorphic.

Aluminium iodide, AlI3, is a dimer with applications in organic synthesis. Aluminium fluoride, AlF3, is
made by treating the hydroxide with HF, or can be made from the elements. It is a macromolecule, which
sublimes without melting at 1,291 °C (2,356 °F). It is very inert. The other trihalides are dimeric, having a
bridge-like structure.

When aluminium and fluoride are together in aqueous solution, they readily form complex ions such as
[AlF(H2O)5]2+, AlF3(H2O)3, and [AlF6]3−. Of these, [AlF6]3− is the most stable: aluminium and fluoride
are both very compact ions and fit together well in the octahedral hexafluoroaluminate complex. When
aluminium and fluoride are together in water in a 1:6
molar ratio, [AlF6]3− is the most common form, even in rather low concentrations.

Organometallic compounds of empirical formula AlR3 exist and, if not also polymers, are at least dimers
or trimers. They have some uses in organic synthesis, for instance trimethylaluminium.

Analysis

The presence of aluminium can be detected in qualitative analysis using aluminon.

Applications

General use

Aluminium is the most widely used non-ferrous metal.[33] Global production of aluminium in 2005 was
31.9 million tonnes. It exceeded that of any other metal except iron (837.5 million tonnes).[34] The
forecast for 2012 is 42–45 million tonnes, driven by rising Chinese output.[35] Relatively pure aluminium
is encountered only when corrosion resistance and/or workability is more important than strength or
hardness. A thin
layer of aluminium can be deposited onto a flat surface by physical vapour deposition or (very
infrequently) chemical vapour deposition or other chemical means to form optical coatings and mirrors.
When so deposited, a fresh, pure aluminium film serves as a good reflector (approximately 92%) of
visible light and an excellent reflector (as much as 98%) of medium and far infrared radiation.

Pure aluminium has a low tensile strength, but when combined with thermo-mechanical processing,
aluminium alloys display a marked improvement in mechanical properties, especially when tempered.
Aluminium alloys form vital components of aircraft and rockets as a result of their high strength-to-
weight ratio. Aluminium readily forms alloys with many elements such as copper, zinc, magnesium,
manganese and silicon (e.g., duralumin). Today, almost all bulk metal materials that are referred to
loosely as "aluminium", are actually alloys. For example, the common aluminium foils and beverage cans
are alloys of 92% to 99% aluminium.[36]

Household aluminium foil

Aluminium-bodied Austin "A40 Sports" (circa 1951)

Aluminium slabs being transported from a smelter.

Some of the many uses for aluminium metal are in:

• Transportation (automobiles, aircraft, trucks, railway cars, marine vessels, bicycles, etc.) as sheet, tube, castings, etc.
• Packaging (cans, foil, etc.)
• Construction (windows, doors, siding, building wire, etc.)
• A wide range of household items, from cooking utensils to baseball bats and watches.[37]
• Street lighting poles, sailing ship masts, walking poles, etc.
• Outer shells of consumer electronics, and cases for equipment such as photographic equipment.
• Electrical transmission lines for power distribution.
• MKM steel and Alnico magnets.
• Super purity aluminium (SPA, 99.980% to 99.999% Al), used in electronics and CDs.
• Heat sinks for electronic appliances such as transistors and CPUs.
• Substrate material of metal-core copper-clad laminates used in high-brightness LED lighting.
• Powdered aluminium is used in paint, and in pyrotechnics such as solid rocket fuels and thermite.
• Aluminium reacts with hydrochloric acid to form hydrogen gas.
• A variety of countries, including France, Italy, Poland, Finland, Romania, Israel, and the former Yugoslavia, have issued coins struck in aluminium or aluminium-copper alloys.[38]
• Some guitar models sport aluminium diamond plates on the surface of the instruments, usually either chrome or black. Kramer Guitars and Travis Bean are both known for having produced guitars with necks made of aluminium, which gives the instrument a very distinct sound.

Aluminium compounds

• Aluminium ammonium sulfate (NH4Al(SO4)2), ammonium alum, is used as a mordant, in water purification and sewage treatment, in paper production, as a food additive, and in leather tanning.

• Aluminium acetate is a salt used in solution as an astringent.

• Aluminium borate (Al2O3·B2O3) is used in the production of glass and ceramic.

• Aluminium borohydride (Al(BH4)3) is used as an additive to jet fuel.

• Aluminium bronze (CuAl5).

• Aluminium chloride (AlCl3) is used in paint manufacturing, in antiperspirants, in petroleum refining and in the production of synthetic rubber.

• Aluminium chlorohydrate is used as an antiperspirant and in the treatment of hyperhidrosis.

• Aluminium fluorosilicate (Al2(SiF6)3) is used in the production of synthetic gemstones, glass and ceramic.

• Aluminium hydroxide (Al(OH)3) is used as an antacid, as a mordant, in water purification, in the manufacture of glass and ceramic, and in the waterproofing of fabrics.

• Aluminium oxide (Al2O3), alumina, is found naturally as corundum (rubies and sapphires) and emery, and is used in glass making. Synthetic ruby and sapphire are used in lasers for the production of coherent light. It is also used as a refractory, essential for the production of high-pressure sodium lamps.

• Aluminium phosphate (AlPO4) is used in the manufacture of glass and ceramic, pulp and paper products, cosmetics, paints and varnishes, and in making dental cement.

• Aluminium sulfate (Al2(SO4)3) is used in the manufacture of paper, as a mordant, in fire extinguishers, in water purification and sewage treatment, as a food additive, in fireproofing, and in leather tanning.

• Aqueous aluminium ions (such as those found in aqueous aluminium sulfate) are used to treat fish parasites such as Gyrodactylus salaris.

• In many vaccines, certain aluminium salts serve as an immune adjuvant (immune response booster) to allow the protein in the vaccine to achieve sufficient potency as an immune stimulant.

Aluminium alloys in structural applications

Aluminium foam

Main article: Aluminium alloy

Aluminium alloys with a wide range of properties are used in engineering structures. Alloy systems are
classified by a number system (ANSI) or by names indicating their main alloying constituents (DIN and
ISO).

The strength and durability of aluminium alloys vary widely, not only as a result of the components of the
specific alloy, but also as a result of heat treatments and manufacturing processes. A lack of knowledge of
these aspects has from time to time led to improperly designed structures and gained aluminium a bad
reputation.

One important structural limitation of aluminium alloys is their fatigue strength. Unlike steels, aluminium
alloys have no well-defined fatigue limit, meaning that fatigue failure eventually occurs even under very
small cyclic loadings. This implies that engineers must assess these loads and design for a fixed life rather
than an infinite life.

Another important property of aluminium alloys is their sensitivity to heat. Workshop procedures
involving heating are complicated by the fact that aluminium, unlike steel, melts without first glowing
red. Forming operations where a blow torch is used therefore require some expertise, since no visual signs
reveal how close the material is to melting. Aluminium alloys, like all structural alloys, also are subject to
internal stresses following heating operations such as welding and casting. The problem with aluminium
alloys in this regard is their low melting point, which makes them more susceptible to distortions from
thermally induced stress relief. Controlled stress relief can be done during manufacturing by heat-treating
the parts in an oven, followed by gradual cooling—in effect annealing the stresses.

The low melting point of aluminium alloys has not precluded their use in rocketry, even in combustion
chambers where gases can reach 3500 K. The Agena upper stage engine used a
regeneratively cooled aluminium design for some parts of the nozzle, including the thermally critical
throat region.

Household wiring

See also: Aluminium wire

Compared to copper, aluminium has about 65% of the electrical conductivity by volume, but about 200%
by weight, i.e. twice the conductivity per unit mass. Traditionally, copper is used as household wiring
material. In the 1960s aluminium was
considerably cheaper than copper, and so was introduced for household electrical wiring in the United
States, even though many fixtures were not designed to accept aluminium wire. In some cases the greater
coefficient of thermal expansion of aluminium causes the wire to expand and contract relative to the
dissimilar metal screw connection, eventually loosening the connection. Also, pure aluminium has a
tendency to creep under steady sustained pressure (to a greater degree as the temperature rises), again
loosening the connection. Finally, galvanic corrosion from the dissimilar metals increased the electrical
resistance of the connection.
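
The volume and weight trade-off can be made concrete: for equal resistance, an aluminium conductor needs roughly 1/0.65 ≈ 1.5 times the cross-section of a copper one, yet still weighs far less per metre. A minimal sketch, where the conductivity ratio comes from the text but the handbook densities (2,700 and 8,960 kg/m³) are assumptions of the example:

```python
# Compare aluminium and copper conductors of equal resistance and length.

DENSITY_AL = 2700.0        # kg/m^3 (handbook value, assumed)
DENSITY_CU = 8960.0        # kg/m^3 (handbook value, assumed)
CONDUCTIVITY_RATIO = 0.65  # aluminium vs copper, by volume (from text)

cu_area = 1.0                            # arbitrary unit cross-section
al_area = cu_area / CONDUCTIVITY_RATIO   # needed for equal resistance

cu_mass = cu_area * DENSITY_CU           # mass per unit length
al_mass = al_area * DENSITY_AL

print(f"Aluminium needs {al_area / cu_area:.2f}x the cross-section,")
print(f"but weighs only {al_mass / cu_mass:.0%} as much per metre.")
```

The weight advantage this shows (roughly half the mass for the same resistance) is why aluminium nonetheless dominates overhead transmission lines.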

All of this resulted in overheated and loose connections, and this in turn resulted in fires. Builders then
became wary of using the wire, and many jurisdictions outlawed its use in very small sizes in new
construction. Eventually, newer fixtures were introduced with connections designed to avoid loosening
and overheating. The first generation fixtures were marked "Al/Cu" and were ultimately found suitable
only for copper-clad aluminium wire, but the second generation fixtures, which bear a "CO/ALR" coding,
are rated for unclad aluminium wire. To adapt older assemblies, workers forestall the heating problem
using a properly done crimp of the aluminium wire to a short "pigtail" of copper wire. Today, new alloys,
designs, and methods are used for aluminium wiring in combination with aluminium termination.

History

The statue of Anteros (commonly mistaken for either The Angel of Christian Charity or Eros) in
Piccadilly Circus, London, was made in 1893 and is one of the first statues to have been cast in aluminium.

Ancient Greeks and Romans used aluminium salts as dyeing mordants and as astringents for dressing
wounds; alum is still used as a styptic. In 1761 Guyton de Morveau suggested calling the base alum
alumine. In 1808, Humphry Davy identified the existence of a metal base of alum, which he at first
termed alumium and later aluminum (see Etymology section, below).

The metal was first produced in 1825 (in an impure form) by Danish physicist and chemist Hans Christian
Ørsted. He reacted anhydrous aluminium chloride with potassium amalgam, obtaining a lump of metal
resembling tin.[39] Friedrich Wöhler was aware of these experiments and cited them, but after
redoing Ørsted's experiments he concluded that the metal obtained was pure potassium. He conducted a
similar experiment in 1827 by mixing anhydrous aluminium chloride with potassium and obtained
aluminium.[39] Wöhler is generally credited with isolating aluminium (Latin alumen, alum), but also
Ørsted can be listed as its discoverer.[40] Further, Pierre Berthier discovered aluminium in bauxite ore and
successfully extracted it.[41] Frenchman Henri Etienne Sainte-Claire Deville improved Wöhler's method in
1846, and described his improvements in a book in 1859, chief among these being the substitution of
sodium for the considerably more expensive potassium.[42] Deville likely also conceived the idea of the
electrolysis of aluminium oxide dissolved in cryolite; Charles Martin Hall and Paul Héroult might have
developed the more practical process after Deville.

Before the Hall-Héroult process was developed, aluminium was exceedingly difficult to extract from its
various ores. This made pure aluminium more valuable than gold.[43] Bars of aluminium were exhibited at
the Exposition Universelle of 1855,[44] and Napoleon III was said[citation needed] to have reserved a set of
aluminium dinner plates for his most honoured guests.

Aluminium was selected as the material to be used for the apex of the Washington Monument in 1884, a
time when one ounce (30 grams) cost the daily wage of a common worker on the project;[45] aluminium
was then worth about the same as silver.

The Cowles companies supplied aluminium alloy in quantity in the United States and England using
smelters like the furnace of Carl Wilhelm Siemens by 1886.[46] Charles Martin Hall of Ohio in the U.S.
and Paul Héroult of France independently developed the Hall-Héroult electrolytic process that made
extracting aluminium from minerals cheaper; it is now the principal method used worldwide. The Hall-
Héroult process cannot produce super-purity aluminium directly. In 1888, with the financial backing of
Alfred E. Hunt, Hall started the Pittsburgh Reduction Company,[47] known today as Alcoa.
Héroult's process was in production by 1889 in Switzerland at Aluminium Industrie, now Alcan, and at
British Aluminium, now Luxfer Group and Alcoa, by 1896 in Scotland.[48]

By 1895 the metal was being used as a building material as far away as Sydney, Australia in the dome of
the Chief Secretary's Building.

Many navies have used an aluminium superstructure for their vessels; the 1975 fire aboard USS Belknap
that gutted her aluminium superstructure, as well as observation of battle damage to British ships during
the Falklands War, led to many navies switching to all steel superstructures. The Arleigh Burke class was
the first such U.S. ship, being constructed entirely of steel.

In 2008 the price of aluminium peaked at $1.45/lb in July but dropped to $0.70/lb by December.[49]

Etymology

Nomenclature history

The earliest citation given in the Oxford English Dictionary for any word used as a name for this element
is alumium, which British chemist and inventor Humphry Davy employed in 1808 for the metal he was
trying to isolate electrolytically from the mineral alumina. The citation is from the journal Philosophical
Transactions of the Royal Society of London: "Had I been so fortunate as to have obtained more certain
evidences on this subject, and to have procured the metallic substances I was in search of, I should have
proposed for them the names of silicium, alumium, zirconium, and glucium."[50][51]

Davy settled on aluminum by the time he published his 1812 book Chemical Philosophy: "This substance
appears to contain a peculiar metal, but as yet Aluminum has not been obtained in a perfectly free state,
though alloys of it with other metalline substances have been procured sufficiently distinct to indicate the
probable nature of alumina."[52] But the same year, an anonymous contributor to the Quarterly Review, a
British political-literary journal, in a review of Davy's book, objected to aluminum and proposed the name
aluminium, "for so we shall take the liberty of writing the word, in preference to aluminum, which has a
less classical sound."[53]

The -ium suffix conformed to the precedent set in other newly discovered elements of the time:
potassium, sodium, magnesium, calcium, and strontium (all of which Davy isolated himself).
Nevertheless, -um spellings for elements were not unknown at the time, as for example platinum, known
to Europeans since the sixteenth century, molybdenum, discovered in 1778, and tantalum, discovered in
1802. The -um suffix is consistent with the universal spelling alumina for the oxide, as lanthana is the
oxide of lanthanum, and magnesia, ceria, and thoria are the oxides of magnesium, cerium, and thorium
respectively.

The spelling used throughout the 19th century by most U.S. chemists ended in -ium, but common usage is
less clear.[54] The -um spelling is used in Webster's Dictionary of 1828. In an 1892 advertising handbill
for his new electrolytic method of producing the metal, Charles Martin Hall used the -um spelling,
despite his constant use of the -ium spelling in all the patents[47] he filed between 1886 and 1903.[55] It has
consequently been suggested that the spelling reflects an easier-to-pronounce word with one fewer
syllable, or that the spelling on the flier was a mistake. Hall's domination of production of the metal
ensured that the spelling aluminum became the standard in North America; the Webster Unabridged
Dictionary of 1913, though, continued to use the -ium version.

In 1926, the American Chemical Society officially decided to use aluminum in its publications; American
dictionaries typically label the spelling aluminium as a British variant.

The name "aluminium" derives from its status as a base of alum. "Alum" in turn is a Latin word that
literally means "bitter salt".[56]

Present-day spelling

Most countries use the spelling aluminium (with an i before -um). In the United States, this spelling is
largely unknown, and the spelling aluminum predominates.[57][58] The Canadian Oxford Dictionary prefers
aluminum, whereas the Australian Macquarie Dictionary prefers aluminium.

The International Union of Pure and Applied Chemistry (IUPAC) adopted aluminium as the standard
international name for the element in 1990, but three years later recognized aluminum as an acceptable
variant. Hence their periodic table includes both.[59] IUPAC prefers the use of aluminium in its internal
publications, although nearly as many IUPAC publications use the spelling aluminum.[60]

Health concerns

NFPA 704

Fire diamond for aluminium shot

Despite its natural abundance, aluminium has no known function in living cells and presents some toxic
effects in elevated concentrations. Its toxicity can be traced to deposition in bone and the central nervous
system, which is particularly increased in patients with reduced renal function. Because aluminium
competes with calcium for absorption, increased amounts of dietary aluminium may contribute to the
reduced skeletal mineralization (osteopenia) observed in preterm infants and infants with growth
retardation. In very high doses, aluminium can cause neurotoxicity, and is associated with altered function
of the blood-brain barrier.[61] A small percentage of people are allergic to aluminium and experience
contact dermatitis, digestive disorders, vomiting or other symptoms upon contact or ingestion of products
containing aluminium, such as deodorants or antacids. In those without allergies, aluminium is not as
toxic as heavy metals, but there is evidence of some toxicity if it is consumed in excessive amounts.[62]
Although the use of aluminium cookware has not been shown to lead to aluminium toxicity in general,
excessive consumption of antacids containing aluminium compounds and excessive use of aluminium-
containing antiperspirants provide more significant exposure levels. Studies have shown that consumption
of acidic foods or liquids with aluminium significantly increases aluminium absorption,[63] and maltol has
been shown to increase the accumulation of aluminium in nervous and osseous tissue.[64] Furthermore,
aluminium increases estrogen-related gene expression in human breast cancer cells cultured in the
laboratory.[65] The estrogen-like effects of these salts have led to their classification as metalloestrogens.

Because of its potentially toxic effects, aluminium's use in some antiperspirants, dyes (such as aluminium
lake), and food additives is controversial. Although there is little evidence that normal exposure to
aluminium presents a risk to healthy adults,[66] several studies point to risks associated with increased
exposure to the metal.[67] Aluminium in food may be absorbed more than aluminium from water.[68] Some
researchers have expressed concerns that the aluminium in antiperspirants may increase the risk of breast
cancer,[69] and aluminium has controversially been implicated as a factor in Alzheimer's disease.[70] The
Camelford water pollution incident involved a number of people consuming aluminium sulfate.
Investigations of the long-term health effects are still ongoing, but elevated brain aluminium
concentrations have been found in post-mortem examinations of victims, and further
research to determine if there is a link with cerebral amyloid angiopathy has been commissioned.[71]

According to The Alzheimer's Society, the overwhelming medical and scientific opinion is that studies
have not convincingly demonstrated a causal relationship between aluminium and Alzheimer's disease.[72]
Nevertheless, some studies, such as those on the PAQUID cohort,[73] cite aluminium exposure as a risk
factor for Alzheimer's disease. Some brain plaques have been found to contain increased levels of the
metal.[74] Research in this area has been inconclusive; aluminium accumulation may be a consequence of
the disease rather than a causal agent. In any event, if there is any toxicity of aluminium, it must be via a
very specific mechanism, since total human exposure to the element in the form of naturally occurring
clay in soil and dust is enormously large over a lifetime.[75][76] Scientific consensus does not yet exist
about whether aluminium exposure could directly increase the risk of Alzheimer's disease.[72]

Effect on plants

Aluminium is primary among the factors that reduce plant growth on acid soils. Although it is generally
harmless to plant growth in pH-neutral soils, the concentration in acid soils of toxic Al3+ cations increases
and disturbs root growth and function.[77][78][79]

328
Most acid soils are saturated with aluminium rather than hydrogen ions. The acidity of the soil is therefore
a result of hydrolysis of aluminium compounds.[80] This concept of "corrected lime potential"[81] to define
the degree of base saturation in soils became the basis for procedures now used in soil testing laboratories
to determine the "lime requirement"[82] of soils.[83]

Wheat has adapted to tolerate aluminium: the metal induces the release of organic compounds that bind
to the harmful aluminium cations. Sorghum is believed to have the same tolerance mechanism. The first
gene for aluminium tolerance has been identified in wheat, and sorghum's aluminium tolerance was
shown to be controlled by a single gene, as in wheat.[84] This is not the case in all
plants.

Gold


This article is about the metal. For the color, see Gold (color). For other uses, see Gold
(disambiguation).

Position in the periodic table: platinum ← gold → mercury (period 6); silver (Ag) above and roentgenium (Rg) below (group 11); atomic number 79 (79Au).

Appearance

metallic yellow

General properties

Name, symbol, number gold, Au, 79

Pronunciation /ˈɡoʊld/

Element category transition metal

Group, period, block 11, 6, d

Standard atomic weight 196.966569 g·mol−1

Electron configuration [Xe] 4f14 5d10 6s1

Electrons per shell 2, 8, 18, 32, 18, 1 (Image)

Physical properties

Phase solid

Density (near r.t.) 19.30 g·cm−3

Liquid density at m.p. 17.31 g·cm−3

Melting point 1337.33 K, 1064.18 °C, 1947.52 °F

Boiling point 3129 K, 2856 °C, 5173 °F

Heat of fusion 12.55 kJ·mol−1

Heat of vaporization 324 kJ·mol−1

Specific heat capacity (25 °C) 25.418 J·mol−1·K−1

Vapor pressure

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 1646 1814 2021 2281 2620 3078

Atomic properties

Oxidation states −1, 1, 2, 3, 4, 5 (amphoteric oxide)

Electronegativity 2.54 (Pauling scale)

Ionization energies 1st: 890.1 kJ·mol−1

2nd: 1980 kJ·mol−1

Atomic radius 144 pm

Covalent radius 136±6 pm

Van der Waals radius 166 pm

Miscellanea

Crystal structure face-centered cubic

Magnetic ordering diamagnetic

Electrical resistivity (20 °C) 22.14 nΩ·m

Thermal conductivity (300 K) 318 W·m−1·K−1

Thermal expansion (25 °C) 14.2 µm·m−1·K−1

Speed of sound (thin rod) (r.t.) 2030 m·s−1

Tensile strength 120 MPa

Young's modulus 79 GPa

Shear modulus 27 GPa

Bulk modulus 180 GPa

Poisson ratio 0.44

Mohs hardness 2.5

Vickers hardness 216 MPa

Brinell hardness 25 HB

CAS registry number 7440-57-5

Most stable isotopes

Main article: Isotopes of gold

iso    NA     half-life   DM   DE (MeV)   DP
195Au  syn    186.10 d    ε    0.227      195Pt
196Au  syn    6.183 d     ε    1.506      196Pt
                          β−   0.686      196Hg
197Au  100%   197Au is stable with 118 neutrons
198Au  syn    2.69517 d   β−   1.372      198Hg
199Au  syn    3.169 d     β−   0.453      199Hg

Gold ( /ˈɡoʊld/) is a chemical element with the symbol Au (from Latin: aurum "gold", originally
"shining dawn") and an atomic number of 79. It has been a highly sought-after precious metal for
coinage, jewelry, and other arts since the beginning of recorded history. The native metal occurs as
nuggets or grains in rocks, in veins and in alluvial deposits. Less commonly, it occurs in minerals as gold
compounds, usually with tellurium. Gold metal is dense, soft, shiny and the most malleable and ductile
pure metal known. Pure gold has a bright yellow color and luster traditionally considered attractive,
which it maintains without oxidizing in air or water. Gold is one of the coinage metals and has served as a
symbol of wealth and a store of value throughout history. Gold standards have provided a basis for
monetary policies. It also has been linked to a variety of symbolisms and ideologies.

A total of 165,000 tonnes of gold have been mined in human history, as of 2009.[1] This is roughly
equivalent to 5.3 billion troy ounces or, in terms of volume, about 8,500 m³, or a cube 20.4 m on a side.
The world consumption of new gold produced is about 50% in jewelry, 40% in investments, and 10% in
industry.
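
These volume figures follow directly from gold's density (19,300 kg/m³, given in the infobox above); a minimal sketch of the arithmetic:

```python
# Check the cube-size claim: 165,000 tonnes of gold ever mined.

mined_kg = 165_000 * 1000   # total gold mined, in kilograms
density = 19_300            # kg per cubic metre (from the infobox)

volume = mined_kg / density       # ~8,500 m^3
side = volume ** (1 / 3)          # edge of the equivalent cube

TROY_OUNCE_G = 31.1035            # grams per troy ounce (standard value)
troy_ounces = mined_kg * 1000 / TROY_OUNCE_G

print(f"Volume: {volume:,.0f} m^3, cube side: {side:.1f} m")
print(f"Troy ounces: {troy_ounces / 1e9:.1f} billion")
```

This reproduces the figures in the text: roughly 8,500 m³, a cube about 20.4 m on a side, and 5.3 billion troy ounces.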

Although primarily used as a store of value, gold has many modern industrial uses including dentistry and
electronics. Gold has traditionally found use because of its good resistance to oxidative corrosion and
excellent quality as a conductor of electricity.

Chemically, gold is a transition metal. Compared with other metals, pure gold is among the least reactive,
resisting individual acids but being attacked by the acid mixture aqua regia, so named because it dissolves
gold. Gold also dissolves in alkaline solutions of cyanide, which have been used in mining. Gold
dissolves in mercury, forming amalgam alloys. Gold is insoluble in nitric acid, which dissolves silver and
base metals, a property that has long been used to confirm the presence of gold in items, and this is the
origin of the colloquial term "acid test", referring to a gold standard test for genuine value.

Characteristics

Gold is the most malleable and ductile of all metals; a single gram can be beaten into a sheet of 1 square
meter, or an ounce into 300 square feet. Gold leaf can be beaten thin enough to become translucent. The
transmitted light appears greenish blue, because gold strongly reflects yellow and red. [2] Such semi-
transparent sheets also strongly reflect infrared light, making them useful as infrared (radiant heat) shields
in visors of heat-resistant suits, and in sun-visors for spacesuits.[3]
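
The quoted malleability implies an astonishingly thin leaf, which can be checked from gold's density: one gram spread over a square metre is a film only about 50 nanometres thick, thinner than the wavelength of visible light and hence translucent. A quick sketch of the calculation:

```python
# Thickness implied by "one gram beaten into a sheet of 1 square metre".

mass = 1e-3        # kg (one gram)
density = 19_300   # kg/m^3 (from the infobox)
area = 1.0         # m^2

thickness = mass / (density * area)
print(f"Thickness: {thickness * 1e9:.0f} nm")   # ~52 nm
```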

Gold readily creates alloys with many other metals. These alloys can be produced to modify the hardness
and other metallurgical properties, to control melting point or to create exotic colors (see below).[4] Gold
is a good conductor of heat and electricity and reflects infrared radiation strongly. Chemically, it is
unaffected by air, moisture and most corrosive reagents, and is therefore well suited for use in coins and
jewelry and as a protective coating on other, more reactive, metals. However, it is not chemically inert.

Common oxidation states of gold include +1 (gold(I) or aurous compounds) and +3 (gold(III) or auric
compounds). Gold ions in solution are readily reduced and precipitated out as gold metal by adding any
other metal as the reducing agent. The added metal is oxidized and dissolves allowing the gold to be
displaced from solution and be recovered as a solid precipitate.

High-quality pure metallic gold is tasteless and scentless, in keeping with its resistance to corrosion (it is
metal ions which confer taste to metals).[5]

In addition, gold is very dense, a cubic meter weighing 19,300 kg. By comparison, the density of lead is
11,340 kg/m3, and that of the densest element, osmium, is 22,610 kg/m3.

Color

Whereas most other pure metals are gray or silvery white, gold is yellow. This color is determined by the
density of loosely bound (valence) electrons; those electrons oscillate as a collective "plasma" medium
described in terms of a quasiparticle called plasmon. The frequency of these oscillations lies in the
ultraviolet range for most metals, but it falls into the visible range for gold due to subtle relativistic effects
that affect the orbitals around gold atoms.[6][7] Similar effects impart a golden hue to metallic cesium (see
relativistic quantum chemistry).

Common colored gold alloys such as rose gold can be created by the addition of various amounts of
copper and silver, as indicated in the triangular diagram to the left. Alloys containing palladium or nickel
are also important in commercial jewelry as these produce white gold alloys. Less commonly, addition of
manganese, aluminium, iron, indium and other elements can produce more unusual colors of gold for
various applications.[4]

Isotopes

Main article: Isotopes of gold

Gold has only one stable isotope, 197Au, which is also its only naturally occurring isotope. Thirty-six
radioisotopes have been synthesized, ranging in atomic mass from 169 to 205. The most stable of these is
195Au with a half-life of 186.1 days. The least stable is 171Au, which decays by proton emission with a
half-life of 30 µs. Most of gold's radioisotopes with atomic masses below 197 decay by some
combination of proton emission, α decay, and β+ decay. The exceptions are 195Au, which decays by
electron capture, and 196Au, which decays most often by electron capture (93%) with a minor β− decay
path (7%).[8] All of gold's radioisotopes with atomic masses above 197 decay by β− decay.[9]

At least 32 nuclear isomers have also been characterized, ranging in atomic mass from 170 to 200. Within
that range, only 178Au, 180Au, 181Au, 182Au, and 188Au do not have isomers. Gold's most stable isomer is
198m2Au with a half-life of 2.27 days. Gold's least stable isomer is 177m2Au with a half-life of only 7 ns.
184m1Au has three decay paths: β+ decay, isomeric transition, and alpha decay. No other isomer or isotope
of gold has three decay paths.[9]

Native gold nuggets. Native gold and gold on quartz. Different colors of Ag-Au-Cu alloys.

Use and applications

Monetary exchange

Gold has been widely used throughout the world as a vehicle for monetary exchange, either by issuance
and recognition of gold coins or other bare metal quantities, or through gold-convertible paper
instruments by establishing gold standards in which the total value of issued money is represented in a
store of gold reserves.

However, production has not grown in relation to the world's economies. Today, gold mining output is
declining.[10] With the sharp growth of economies in the 20th century, and increasing foreign exchange,
the world's gold reserves and their trading market have become a small fraction of all markets, and fixed
exchange rates of currencies to gold were no longer sustained. At the beginning of World War I the
warring nations moved to a fractional gold standard, inflating their currencies to finance the war effort.
After World War II gold was replaced by a system of convertible currency following the Bretton Woods
system. Gold standards and the direct convertibility of currencies to gold have been abandoned by world
governments, being replaced by fiat currency in their stead. Switzerland was the last country to tie its
currency to gold; it backed 40% of its value until the Swiss joined the International Monetary Fund in
1999.[11]

Pure gold is too soft for day-to-day monetary use and is typically hardened by alloying with copper, silver
or other base metals. The gold content of alloys is measured in carats (k); pure gold is designated as 24k.
English gold coins intended for circulation from 1526 into the 1930s were typically a standard 22k alloy
called crown gold, chosen for hardness (American gold coins for circulation after 1837 contained the
slightly lower amount of 0.900 fine gold, i.e. 21.6k).
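
Caratage is simply the gold mass fraction scaled to 24, so fineness and carats are interconvertible. A minimal helper illustrating the conversions used above (the function names are this sketch's own):

```python
# Convert between gold fineness (mass fraction) and carats:
# carats = 24 * mass_fraction, e.g. 0.900 fine -> 21.6k.

def fineness_to_carats(fineness: float) -> float:
    """Gold mass fraction (0.0-1.0) to carats (0-24)."""
    return 24 * fineness

def carats_to_fineness(carats: float) -> float:
    """Carats (0-24) to gold mass fraction."""
    return carats / 24

print(fineness_to_carats(0.900))  # 21.6 (American coins after 1837)
print(carats_to_fineness(22))     # 0.9166... (crown gold)
```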

Investment

Main article: Gold as an investment

Many holders of gold store it in the form of bullion coins or bars as a hedge against inflation or other
economic disruptions. However, some economists do not believe gold serves as a hedge against inflation
or currency depreciation.[12]

The ISO 4217 currency code of gold is XAU.

Modern bullion coins for investment or collector purposes do not require good mechanical wear
properties; they are typically fine gold at 24k, although the American Gold Eagle, the British gold
sovereign, and the South African Krugerrand continue to be minted in 22k metal in historical tradition.
The special issue Canadian Gold Maple Leaf coin contains the highest purity gold of any bullion coin, at
99.999% or 0.99999, while the popular issue Canadian Gold Maple Leaf coin has a purity of 99.99%.
Several other 99.99% pure gold coins are available. In 2006, the United States Mint began production of
the American Buffalo gold bullion coin with a purity of 99.99%. The Australian Gold Kangaroos were
first coined in 1986 as the Australian Gold Nugget but changed the reverse design in 1989. Other popular
modern coins include the Austrian Vienna Philharmonic bullion coin and the Chinese Gold Panda.

Jewelry

Main article: Jewellery

Moche gold necklace depicting feline heads. Larco Museum Collection. Lima-Peru

Because of the softness of pure (24k) gold, it is usually alloyed with base metals for use in jewelry,
altering its hardness and ductility, melting point, color and other properties. Alloys with lower caratage,
typically 22k, 18k, 14k or 10k, contain higher percentages of copper, other base metals, silver or
palladium. Copper is the most commonly used base metal, yielding a redder color. Eighteen-
carat gold containing 25% copper is found in antique and Russian jewelry and has a distinct, though not
dominant, copper cast, creating rose gold. Fourteen-carat gold-copper alloy is nearly identical in color to
certain bronze alloys, and both may be used to produce police and other badges. Blue gold can be made
by alloying with iron and purple gold can be made by alloying with aluminium, although rarely done
except in specialized jewelry. Blue gold is more brittle and therefore more difficult to work with when
making jewelry. Fourteen and eighteen carat gold alloys with silver alone appear greenish-yellow and are
referred to as green gold. White gold alloys can be made with palladium or nickel. White 18-carat gold
containing 17.3% nickel, 5.5% zinc and 2.2% copper is silvery in appearance. Nickel is toxic, however,
and its release from nickel white gold is controlled by legislation in Europe. Alternative white gold alloys
are available based on palladium, silver and other white metals,[13] but the palladium alloys are more
expensive than those using nickel. High-carat white gold alloys are far more resistant to corrosion than
are either pure silver or sterling silver. The Japanese craft of Mokume-gane exploits the color contrasts
between laminated colored gold alloys to produce decorative wood-grain effects.

Medicine

In medieval times, gold was often seen as beneficial for health, in the belief that something that rare
and beautiful could not be anything but healthy. Even some modern esotericists and forms of alternative
medicine assign metallic gold a healing power.[14] Some gold salts do have anti-inflammatory properties
and are used as pharmaceuticals in the treatment of arthritis and other similar conditions.[15] However,
only salts and radioisotopes of gold are of pharmacological value, as elemental (metallic) gold is inert to
all chemicals it encounters inside the body. In modern times, injectable gold has been proven to help to
reduce the pain and swelling of rheumatoid arthritis and tuberculosis.[15][16]

Gold alloys are used in restorative dentistry, especially in tooth restorations, such as crowns and
permanent bridges. The gold alloys' slight malleability facilitates the creation of a superior molar mating
surface with other teeth and produces results that are generally more satisfactory than those produced by
the creation of porcelain crowns. The use of gold crowns in more prominent teeth such as incisors is
favored in some cultures and discouraged in others.

Colloidal gold preparations (suspensions of gold nanoparticles) in water are intensely red-colored, and
can be made with tightly controlled particle sizes up to a few tens of nanometers across by reduction of
gold chloride with citrate or ascorbate ions. Colloidal gold is used in research applications in medicine,
biology and materials science. The technique of immunogold labeling exploits the ability of the gold
particles to adsorb protein molecules onto their surfaces. Colloidal gold particles coated with specific
antibodies can be used as probes for the presence and position of antigens on the surfaces of cells.[17] In
ultrathin sections of tissues viewed by electron microscopy, the immunogold labels appear as extremely
dense round spots at the position of the antigen.[18] Colloidal gold is also the form of gold used as gold
paint on ceramics prior to firing.

Gold, or alloys of gold and palladium, are applied as conductive coating to biological specimens and other
non-conducting materials such as plastics and glass to be viewed in a scanning electron microscope. The
coating, which is usually applied by sputtering with an argon plasma, has a triple role in this application.
Gold's very high electrical conductivity drains electrical charge to earth, and its very high density
provides stopping power for electrons in the electron beam, helping to limit the depth to which the
electron beam penetrates the specimen. This improves definition of the position and topography of the
specimen surface and increases the spatial resolution of the image. Gold also produces a high output of
secondary electrons when irradiated by an electron beam, and these low-energy electrons are the signal
source most commonly used in the scanning electron microscope.[19]

The isotope gold-198 (half-life 2.7 days) is used in some cancer treatments and for treating other
diseases.[20]

Food and drink

• Gold can be used in food and has the E number 175.[21]

• Gold leaf, flake or dust is used on and in some gourmet foods, notably sweets and drinks, as a decorative ingredient.[22] Gold flake was used by the nobility in medieval Europe as a decoration in food and drinks, either to demonstrate the host's wealth or in the belief that something so valuable and rare must be beneficial for one's health.

• Danziger Goldwasser (German: "Gold water of Danzig") or Goldwasser (English: "Goldwater") is a traditional German herbal liqueur[23] produced in what is today Gdańsk, Poland, and in Schwabach, Germany, and contains flakes of gold leaf. There are also some expensive (~$1000) cocktails which contain flakes of gold leaf.[24] However, since metallic gold is inert to all body chemistry, it adds no taste, has no other nutritional effect, and leaves the body unaltered.[25]

Industry

The 220 kg gold brick displayed in Jinguashi Gold Museum, Taiwan, Republic of China

The world's largest gold bar weighs 250 kg. Toi museum, Japan.

A gold nugget of 5 mm in diameter (bottom) can be expanded through hammering into a gold foil of
about 0.5 square meter. Toi museum, Japan.

• Gold solder is used for joining the components of gold jewelry by high-temperature hard soldering or brazing. If the work is to be of hallmarking quality, the gold solder must match the carat weight of the work, and alloy formulas are manufactured in most industry-standard carat weights to color-match yellow and white gold. Gold solder is usually made in at least three melting-point ranges referred to as Easy, Medium and Hard. By using the hard, high-melting-point solder first, followed by solders with progressively lower melting points, goldsmiths can assemble complex items with several separate soldered joints.

• Gold can be made into thread and used in embroidery.

• Gold is ductile and malleable, meaning it can be drawn into very thin wire and can be beaten into very thin sheets known as gold leaf.

• Gold produces a deep, intense red color when used as a coloring agent in cranberry glass.

• In photography, gold toners are used to shift the color of silver bromide black-and-white prints towards brown or blue tones, or to increase their stability. Used on sepia-toned prints, gold toners produce red tones. Kodak published formulas for several types of gold toners, which use gold as the chloride.[26]

• As gold is a good reflector of electromagnetic radiation such as infrared and visible light, as well as radio waves, it is used for the protective coatings on many artificial satellites, in infrared protective faceplates in thermal-protection suits and astronauts' helmets, and in electronic-warfare planes such as the EA-6B Prowler.

• Gold is used as the reflective layer on some high-end CDs.

• Automobiles may use gold for heat dissipation; McLaren uses gold foil in the engine compartment of its F1 model.[27]

• Gold can be manufactured so thin that it appears transparent. It is used in some aircraft cockpit windows for de-icing or anti-icing by passing electricity through it; the heat produced by the resistance of the gold is enough to deter ice from forming.[28]

Electronics

The concentration of free electrons in gold metal is 5.90×1022 cm−3. Gold is highly conductive to
electricity, and has been used for electrical wiring in some high-energy applications (only silver and
copper are more conductive per volume, but gold has the advantage of corrosion resistance). For example,
gold electrical wires were used during some of the Manhattan Project's atomic experiments, but large high
current silver wires were used in the calutron isotope separator magnets in the project.
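
The quoted free-electron concentration follows from gold's density and molar mass if each atom contributes its single 6s electron. A short verification sketch (Avogadro's number and the molar mass are standard values, not taken from this paragraph):

```python
# Verify gold's free-electron concentration, assuming one
# conduction electron (the 6s electron) per atom.

AVOGADRO = 6.022e23   # atoms per mole
MOLAR_MASS = 196.97   # g/mol for gold
DENSITY = 19.30       # g/cm^3

electrons_per_cm3 = DENSITY / MOLAR_MASS * AVOGADRO
print(f"{electrons_per_cm3:.2e} electrons/cm^3")  # ~5.90e22, as quoted
```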

Though gold is attacked by free chlorine, its good conductivity and general resistance to oxidation and
corrosion in other environments (including resistance to non-chlorinated acids) has led to its widespread
industrial use in the electronic era as a thin layer coating electrical connectors of all kinds, thereby
ensuring good connection. For example, gold is used in the connectors of the more expensive electronics
cables, such as audio, video and USB cables. The benefit of using gold over other connector metals such
as tin in these applications is highly debated. Gold connectors are often criticized by audio-visual experts
as unnecessary for most consumers and seen as simply a marketing ploy. However, the use of gold
remains very common in sliding electrical contacts in highly humid or corrosive atmospheres, and for
contacts with a very high failure cost (certain computers, communications equipment, spacecraft, jet
aircraft engines).[29]

Besides sliding electrical contacts, gold is also used in electrical contacts because of its resistance to
corrosion, electrical conductivity, ductility and lack of toxicity.[30] Switch contacts are generally subjected
to more intense corrosion stress than are sliding contacts. Fine gold wires are used to connect
semiconductor devices to their packages through a process known as wire bonding.

Commercial chemistry

Gold is attacked by and dissolves in alkaline solutions of potassium or sodium cyanide, to form the salt
gold cyanide—a technique that has been used in extracting metallic gold from ores in the cyanide process.
Gold cyanide is the electrolyte used in commercial electroplating of gold onto base metals and
electroforming.

Gold chloride (chloroauric acid) solutions are used to make colloidal gold by reduction with citrate or
ascorbate ions. Gold chloride and gold oxide are used to make highly valued cranberry or red-colored
glass, which, like colloidal gold suspensions, contains evenly sized spherical gold nanoparticles.[31]

History

The Turin Papyrus Map

Funerary mask of Tutankhamun

Jason returns with the golden fleece on an Apulian red-figure calyx krater, ca. 340–330 BC.

Gold has been known and used by artisans since the Chalcolithic. Gold artifacts in the Balkans appear
from the 4th millennium BC, such as that found in the Varna Necropolis. Gold artifacts such as the
golden hats and the Nebra disk appeared in Central Europe from the 2nd millennium BC Bronze Age.

Egyptian hieroglyphs from as early as 2600 BC describe gold, which King Tushratta of the Mitanni
claimed was "more plentiful than dirt" in Egypt.[32] Egypt and especially Nubia had the resources to make
them major gold-producing areas for much of history. The earliest known map, the Turin Papyrus Map,
shows the plan of a gold mine in Nubia together with indications of the local geology.
The primitive working methods are described by both Strabo and Diodorus Siculus, and included fire-
setting. Large mines were also present across the Red Sea in what is now Saudi Arabia.

The legend of the golden fleece may refer to the use of fleeces to trap gold dust from placer deposits in
the ancient world. Gold is mentioned frequently in the Old Testament, starting with Genesis 2:11 (at
Havilah), and is included with the gifts of the magi in the first chapters of the Gospel of Matthew in the
New Testament. The
Book of Revelation 21:21 describes the city of New Jerusalem as having streets "made of pure gold, clear
as crystal". The south-east corner of the Black Sea was famed for its gold. Exploitation is said to date
from the time of Midas, and this gold was important in the establishment of what is probably the world's
earliest coinage in Lydia around 610 BC.[33] From the 6th or 5th century BC, the Chinese state of Chu
circulated the Ying Yuan, one kind of square gold coin.

In Roman metallurgy, new methods for extracting gold on a large scale were developed by introducing
hydraulic mining methods, especially in Hispania from 25 BC onwards and in Dacia from 106 AD
onwards. One of their largest mines was at Las Medulas in León (Spain), where seven long aqueducts
enabled them to sluice most of a large alluvial deposit. The mines at Roşia Montană in Transylvania were
also very large, and until very recently, still mined by opencast methods. They also exploited smaller
deposits in Britain, such as placer and hard-rock deposits at Dolaucothi. The various methods they used
are well described by Pliny the Elder in his encyclopedia Naturalis Historia written towards the end of the
first century AD.

The Mali Empire in Africa was famed throughout the old world for its large amounts of gold. Mansa
Musa, ruler of the empire (1312–1337) became famous throughout the old world for his great hajj to
Mecca in 1324. When he passed through Cairo in July 1324, he was reportedly accompanied by a camel
train that included thousands of people and nearly a hundred camels. He gave away so much gold that it
depressed the price in Egypt for over a decade.[34] A contemporary Arab historian remarked:

Gold was at a high price in Egypt until they came in that year. The mithqal did not go below 25 dirhams
and was generally above, but from that time its value fell and it cheapened in price and has remained
cheap till now. The mithqal does not exceed 22 dirhams or less. This has been the state of affairs for about
twelve years until this day by reason of the large amount of gold which they brought into Egypt and spent
there [...]

—Chihab Al-Umari[35]

The European exploration of the Americas was fueled in no small part by reports of the gold ornaments
displayed in great profusion by Native American peoples, especially in Central America, Peru, Ecuador
and Colombia. The Aztecs regarded gold as literally the product of the gods, calling it "god excrement"
(teocuitlatl in Nahuatl).[36]

Although the price of some platinum group metals can be much higher, gold has long been considered the
most desirable of precious metals, and its value has been used as the standard for many currencies (known
as the gold standard) in history. Gold has been used as a symbol for purity, value, royalty, and particularly
roles that combine these properties. Gold as a sign of wealth and prestige was ridiculed by Thomas More
in his treatise Utopia. On that imaginary island, gold is so abundant that it is used to make chains for
slaves, tableware and lavatory-seats. When ambassadors from other countries arrive, dressed in
ostentatious gold jewels and badges, the Utopians mistake them for menial servants, paying homage
instead to the most modestly dressed of their party.

There is an age-old tradition of biting gold to test its authenticity. Although this is certainly not a
professional way of examining gold, the bite test should score the gold because gold is a soft metal, as
indicated by its score on the Mohs' scale of mineral hardness. The purer the gold the easier it should be to
mark it. Painted lead can cheat this test because lead is softer than gold (and may invite a small risk of
lead poisoning if sufficient lead is absorbed by the biting).

Gold in antiquity was relatively easy to obtain geologically; however, 75% of all gold ever produced has
been extracted since 1910.[37] It has been estimated that all gold ever refined would form a single cube
20 m (66 ft) on a side (equivalent to 8000 m3).[37]

One main goal of the alchemists was to produce gold from other substances, such as lead — presumably
by the interaction with a mythical substance called the philosopher's stone. Although they never
succeeded in this attempt, the alchemists promoted an interest in what can be done with substances, and
this laid a foundation for today's chemistry. Their symbol for gold was the circle with a point at its center
(☉), which was also the astrological symbol and the ancient Chinese character for the Sun. For modern
creation of artificial gold by neutron capture, see gold synthesis.

During the 19th century, gold rushes occurred whenever large gold deposits were discovered. The first
documented discovery of gold in the United States was at the Reed Gold Mine near Georgeville, North
Carolina in 1803.[38] The first major gold strike in the United States occurred in a small north Georgia
town called Dahlonega.[39] Further gold rushes occurred in California, Colorado, the Black Hills, Otago in
New Zealand, Australia, Witwatersrand in South Africa, and the Klondike in Canada.

Because of its historically high value, much of the gold mined throughout history is still in circulation in
one form or another.

Occurrence

This 156-ounce (4.85 kg) nugget was found by an individual prospector in the Southern California Desert
using a metal detector.

Gold's atomic number of 79 makes it one of the highest-numbered elements that occur naturally. Like all
elements with atomic numbers larger than that of iron, gold is thought to have been formed by supernova
nucleosynthesis: supernova explosions scattered metal-containing dust (including heavy elements such as
gold) into the region of space that later condensed into our solar system and the Earth.[40]

On Earth, elemental gold occurs most often as a metallic solid solution with silver, i.e. a gold-silver alloy.
Such alloys usually have a silver content of 8–10%. Electrum is elemental
gold with more than 20% silver. Electrum's color runs from golden-silvery to silvery, dependent upon the
silver content. The more silver, the lower the specific gravity.

Relative sizes of an 860 kg block of gold ore, and the 30 g of gold that can be extracted from it. Toi gold
mine, Japan.

Gold left behind after a pyrite cube was oxidized to hematite. Note cubic shape of cavity.

Gold is found in ores made up of rock with very small or microscopic particles of gold. This gold ore is
often found together with quartz or sulfide minerals such as "fool's gold", which is a pyrite.[41] These are
called lode deposits. Native gold is also found in the form of free flakes, grains or larger nuggets that
have been eroded from rocks and end up in alluvial deposits (called placer deposits). Such free gold is
always richer at the surface of gold-bearing veins owing to the oxidation of accompanying minerals
followed by weathering, and washing of the dust into streams and rivers, where it collects and can be
welded by water action to form nuggets.

Gold sometimes occurs combined with tellurium as the minerals calaverite, krennerite, nagyagite, petzite
and sylvanite, and as the rare bismuthide maldonite (Au2Bi) and antimonide aurostibite (AuSb2). Gold
also occurs in rare alloys with copper, lead, and mercury: the minerals auricupride (Cu3Au), novodneprite
(AuPb3) and weishanite ((Au, Ag)3Hg2).

Recent research suggests that microbes can sometimes play an important role in forming gold deposits,
transporting and precipitating gold to form grains and nuggets that collect in alluvial deposits.[42]

The world's oceans contain gold. Measured concentrations of gold in the Atlantic and Northeast Pacific
are 50–150 fmol/L or 10-30 parts per quadrillion. In general, Au concentrations for Atlantic and Pacific
samples are the same (~50 fmol/L) but less certain. Mediterranean deep waters contain higher
concentrations of Au (100–150 fmol/L) attributed to wind-blown dust and/or rivers. At 10 parts per
quadrillion the Earth's oceans would hold 15,000 tons of gold.[43] These figures are three orders of
magnitude less than reported in the literature prior to 1988, indicating contamination problems with the
earlier data.
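As a rough consistency check, the quoted total can be reproduced from the concentration alone; the sketch
below assumes an ocean mass of about 1.4×10^21 kg, a figure not given in the text:

# Rough check of the dissolved-gold total in the oceans
OCEAN_MASS_KG = 1.4e21        # approximate total mass of Earth's oceans (assumption)
CONCENTRATION = 10e-15        # 10 parts per quadrillion, by mass
gold_tonnes = OCEAN_MASS_KG * CONCENTRATION / 1000
print(f"{gold_tonnes:,.0f} tonnes")  # ~14,000 tonnes, in line with the ~15,000 tons quoted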

A number of people have claimed to be able to economically recover gold from sea water, but so far all
such schemes have proved either mistaken or fraudulent. Prescott Jernegan, a self-styled reverend, ran a
gold-from-seawater swindle in the United States in the 1890s, and a British fraudster ran the same scam in England in the early
1900s.[44] Fritz Haber (the German inventor of the Haber process) did research on the extraction of gold
from sea water in an effort to help pay Germany's reparations following World War I.[45] Based on the
published values of 2 to 64 ppb of gold in seawater a commercially successful extraction seemed possible.
After analysis of 4000 water samples yielding an average of 0.004 ppb it became clear that the extraction
would not be possible and he stopped the project.[46] No commercially viable mechanism for performing
gold extraction from sea water has yet been identified. Gold synthesis is not economically viable and is
unlikely to become so in the foreseeable future.

Gallery of specimens of crystalline native gold

"Rope gold" from Lena River,Crystalline gold from Mina Zapata, SantaGold leaf from Harvard Mine,
Sakha Republic, Russia. Size:Elena de Uairen, Venezuela. Size:Jamestown, California, USA.
2.5×1.2×0.7 cm. 3.7×1.1×0.4 cm. Size 9.3×3.2× >0.1 cm.

Production

Main articles: Gold prospecting, Gold mining, and Gold extraction

Gold output in 2005

The entrance to an underground gold mine in Victoria, Australia

Pure gold precipitate produced by the aqua regia refining process

Gold extraction is most economical in large, easily mined deposits. Ore grades as little as 0.5 mg/kg (0.5
parts per million, ppm) can be economical. Typical ore grades in open-pit mines are 1–5 mg/kg (1–
5 ppm); ore grades in underground or hard rock mines are usually at least 3 mg/kg (3 ppm). Because ore
grades of 30 mg/kg (30 ppm) are usually needed before gold is visible to the naked eye, in most gold
mines the gold is invisible.
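These grade figures translate directly into the tonnage of ore that must be processed per troy ounce of
gold recovered; a minimal sketch:

# Tonnes of ore processed per troy ounce of gold at the grades quoted above
TROY_OUNCE_G = 31.1035
for grade in (0.5, 1.0, 5.0, 30.0):   # grams of gold per tonne of ore (= ppm)
    print(f"{grade:>4} g/t: {TROY_OUNCE_G / grade:5.1f} t of ore per troy ounce")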

Since the 1880s, South Africa has been the source for a large proportion of the world's gold supply, with
about 50% of all gold ever produced having come from South Africa. Production in 1970 accounted for
79% of the world supply, producing about 1,480 tonnes; world production in 2008 totaled 2,260 tonnes. In 2007
China (with 276 tonnes) overtook South Africa as the world's largest gold producer, the first time since
1905 that South Africa has not been the largest.[47]

The city of Johannesburg located in South Africa was founded as a result of the Witwatersrand Gold Rush
which resulted in the discovery of some of the largest gold deposits the world has ever seen. Gold fields
located within the basin in the Free State and Gauteng provinces are extensive in strike and dip requiring
some of the world's deepest mines, with the Savuka and TauTona mines, at 3,777 m, currently the
world's deepest gold mines. The Second Boer War of 1899–1902 between the British Empire and the
Afrikaner Boers was at least partly over the rights of miners and possession of the gold wealth in South
Africa.

Other major producers are the United States, Australia, Russia and Peru. Mines in South Dakota and
Nevada supply two-thirds of gold used in the United States. In South America, the controversial project
Pascua Lama aims at exploitation of rich fields in the high mountains of Atacama Desert, at the border
between Chile and Argentina. Today about one-quarter of the world gold output is estimated to originate
from artisanal or small scale mining.[48]

After initial production, gold is often subsequently refined industrially by the Wohlwill process which is
based on electrolysis or by the Miller process, that is chlorination in the melt. The Wohlwill process
results in higher purity, but is more complex and is only applied in small-scale installations.[49][50] Other
methods of assaying and purifying smaller amounts of gold include parting and inquartation as well as
cupellation, or refining methods based on the dissolution of gold in aqua regia.[51]

At the end of 2009, it was estimated that all the gold ever mined totaled 165,000 tonnes.[1] This can be
represented by a cube with an edge length of about 20.28 meters. At $1,200 per troy ounce, 165,000
tonnes of gold would have a value of about $6.4 trillion.
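Both figures can be checked from the density of gold; the sketch below assumes a density of
19,300 kg/m³ and reproduces the quoted values to within rounding:

# Cube edge and valuation of all gold ever mined (density of gold assumed 19,300 kg/m^3)
MASS_KG = 165_000 * 1000
DENSITY = 19_300
edge_m = (MASS_KG / DENSITY) ** (1 / 3)
value_usd = MASS_KG * 1000 / 31.1035 * 1200   # grams -> troy ounces -> USD at $1,200/oz
print(f"edge = {edge_m:.1f} m, value = {value_usd / 1e12:.1f} trillion USD")
# edge = 20.4 m (close to the quoted 20.28 m), value = 6.4 trillion USD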

The average gold mining and extraction costs were about US$317/oz in 2007, but these can vary widely
depending on mining type and ore quality; global mine production amounted to 2,471.1 tonnes.[52]

Gold is so stable and so valuable that most of the gold used in manufactured goods, jewelry, and works of
art is eventually recovered and recycled. Some gold used in spacecraft and electronic equipment cannot
be profitably recovered, but it is generally used in these applications in the form of extremely thin layers
or extremely fine wires so that the total quantity used (and lost) is small compared to the total amount of
gold produced and stockpiled. Thus there is little true consumption of new gold in the economic sense;
the stock of gold remains essentially constant (at least in the modern world) while ownership shifts from
one party to another.[53] One estimate is that 85% of all the gold ever mined is still available in the world's
easily recoverable stocks, with 15% having been lost, or used in non-recyclable industrial uses.[54]

Consumption

The consumption of gold produced in the world is about 50% in jewelry, 40% in investments, and 10% in
industry.

India is the world's largest single consumer of gold, as Indians buy about 25% of the world's gold,[55]
purchasing approximately 800 tonnes of gold every year, mostly for jewelry. India is also the largest
importer of gold; in 2008, India imported around 400 tonnes of gold.[56]

Chemistry

Gold (III) chloride solution in water

Although gold is a noble metal, it forms many and diverse compounds. The oxidation state of gold in its
compounds ranges from −1 to +5, but Au(I) and Au(III) dominate its chemistry. Au(I), referred to as the
aurous ion, is the most common oxidation state with soft ligands such as thioethers, thiolates, and tertiary
phosphines. Au(I) compounds are typically linear. A good example is Au(CN)2−, which is the soluble
form of gold encountered in mining. Curiously, aurous complexes of water are rare. The binary gold
halides, such as AuCl, form zigzag polymeric chains, again featuring linear coordination at Au. Most
drugs based on gold are Au(I) derivatives.[57]

Au(III) (auric) is a common oxidation state, and is illustrated by gold(III) chloride, Au2Cl6. The gold atom
centers in Au(III) complexes, like other d8 compounds, are typically square planar, with chemical bonds
that have both covalent and ionic character.

Aqua regia, a 1:3 mixture of nitric acid and hydrochloric acid, dissolves gold. Nitric acid oxidizes the
metal to +3 ions, but only in minute amounts, typically undetectable in the pure acid because of the
chemical equilibrium of the reaction. However, the ions are removed from the equilibrium by
hydrochloric acid, forming AuCl4− ions, or chloroauric acid, thereby enabling further oxidation.
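The net reaction may be summarized as:

Au + HNO3 + 4 HCl → HAuCl4 + NO + 2 H2O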

Some free halogens react with gold.[58] Gold also reacts in alkaline solutions of potassium cyanide. With
mercury, it forms an amalgam.

Less common oxidation states

Less common oxidation states of gold include −1, +2, and +5.

The −1 oxidation state occurs in compounds containing the Au− anion, called aurides. Caesium auride
(CsAu), for example, crystallizes in the caesium chloride motif.[59] Other aurides include those of Rb+, K+,
and tetramethylammonium (CH3)4N+.[60]

Gold(II) compounds are usually diamagnetic with Au–Au bonds such as [Au(CH2)2P(C6H5)2]2Cl2. The
evaporation of a solution of Au(OH)3 in concentrated H2SO4 produces red crystals of gold(II) sulfate,
AuSO4. Originally thought to be a mixed-valence compound, it has been shown to contain Au2(4+)
cations.[61][62] A noteworthy, legitimate gold(II) complex is the tetraxenonogold(II) cation, which
contains xenon as a ligand, found in [AuXe4](Sb2F11)2.[63]

Gold pentafluoride and its derivative anion, AuF6−, is the sole example of gold(V), the highest verified
oxidation state.[64]

Some gold compounds exhibit aurophilic bonding, which describes the tendency of gold ions to interact
at distances that are too long to be a conventional Au–Au bond but shorter than van der Waals bonding.
The interaction is estimated to be comparable in strength to that of a hydrogen bond.

Mixed valence compounds

Well-defined cluster compounds are numerous.[60] In such cases, gold has a fractional oxidation state. A
representative example is the octahedral species {Au(P(C6H5)3)}6(2+). Gold chalcogenides, such as gold
sulfide, feature equal amounts of Au(I) and Au(III).

Toxicity

Pure metallic (elemental) gold is non-toxic and non-irritating when ingested[65] and is sometimes used as a
food decoration in the form of gold leaf. Metallic gold is also a component of the alcoholic drinks
Goldschläger, Gold Strike, and Goldwasser. Metallic gold is approved as a food additive in the EU (E175
in the Codex Alimentarius). Although the gold ion is toxic, the acceptance of metallic gold as a food additive
is due to its relative chemical inertness, and resistance to being corroded or transformed into soluble salts
(gold compounds) by any known chemical process which would be encountered in the human body.

Soluble compounds (gold salts) such as gold chloride are toxic to the liver and kidneys. Common cyanide
salts of gold such as potassium gold cyanide, used in gold electroplating, are toxic both by virtue of their
cyanide and gold content. There are rare cases of lethal gold poisoning from potassium gold cyanide.[66][67]
Gold toxicity can be ameliorated with chelation therapy with an agent such as Dimercaprol.

Gold metal was voted Allergen of the Year in 2001 by the American Contact Dermatitis Society. Gold
contact allergies affect mostly women.[68] Despite this, gold is a relatively non-potent contact allergen, in
comparison with metals like nickel.[69]

Price

Like other precious metals, gold is measured by troy weight and by grams. When it is alloyed with other
metals the term carat or karat is used to indicate the amount of gold present, with 24 carats being pure
gold and lower ratings proportionally less. The purity of a gold bar or coin can also be expressed as a
decimal figure ranging from 0 to 1, known as the millesimal fineness, such as 0.995 being very pure.
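Karat ratings and millesimal fineness are related by a simple proportion (a karat is 1/24 part of the whole
by mass); a minimal sketch:

# Convert a karat rating to millesimal fineness: purity = karats / 24 of the mass
def karat_to_fineness(karats):
    return round(karats / 24 * 1000)

for k in (24, 22, 18, 14, 9):
    print(k, "kt ->", karat_to_fineness(k), "fine")
# 24 kt -> 1000, 22 kt -> 917, 18 kt -> 750, 14 kt -> 583, 9 kt -> 375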

The price of gold is determined through trading in the gold and derivatives markets, but a procedure
known as the Gold Fixing in London, originating in September 1919, provides a daily benchmark price to
the industry. The afternoon fixing was introduced in 1968 to provide a price when US markets are open.

Historically gold coinage was widely used as currency; when paper money was introduced, it typically
was a receipt redeemable for gold coin or bullion. In a monetary system known as the gold standard, a
certain weight of gold was given the name of a unit of currency. For a long period, the United States
government set the value of the US dollar so that one troy ounce was equal to $20.67 ($664.56/kg), but in
1934 the dollar was devalued to $35.00 per troy ounce ($1125.27/kg). By 1961, it was becoming hard to
maintain this price, and a pool of US and European banks agreed to manipulate the market to prevent
further currency devaluation against increased gold demand.

Swiss-cast 1 kg gold bar

On March 17, 1968, economic circumstances caused the collapse of the gold pool, and a two-tiered
pricing scheme was established whereby gold was still used to settle international accounts at the old
$35.00 per troy ounce ($1.13/g) but the price of gold on the private market was allowed to fluctuate; this
two-tiered pricing system was abandoned in 1975 when the price of gold was left to find its free-market
level. Central banks still hold historical gold reserves as a store of value although the level has generally
been declining. The largest gold depository in the world is that of the U.S. Federal Reserve Bank in New
York, which holds about 3%[citation needed] of the gold ever mined, as does the similarly laden U.S. Bullion
Depository at Fort Knox.

In 2005 the World Gold Council estimated total global gold supply to be 3,859 tonnes and demand to be
3,754 tonnes, giving a surplus of 105 tonnes.[70]

Since 1968 the price of gold has ranged widely, from a high of $850/oz ($27,300/kg) on January 21,
1980, to a low of $252.90/oz ($8,131/kg) on June 21, 1999 (London Gold Fixing).[71] The period from
1999 to 2001 marked the "Brown Bottom" after a 20-year bear market.[72] Prices increased rapidly from
2001, but the 1980 high was not exceeded until January 3, 2008, when a new maximum of $865.35 per
troy ounce was set (a.m. London Gold Fixing).[73] Another record price was set on March 17, 2008, at
$1023.50/oz ($32,900/kg) (a.m. London Gold Fixing).[73] In the fall of 2009, gold markets experienced
renewed upward momentum due to increased demand and a weakening US dollar. On December 2,
2009, gold passed the important barrier of US$1,200 per ounce to close at $1,215.[74] Gold rallied further,
hitting new highs in May 2010 after the European Union debt crisis prompted further purchase of gold as
a safe asset.[75][76]

Since April 2001 the gold price has more than tripled in value against the US dollar,[77] prompting
speculation that this long secular bear market has ended and a bull market has returned.[78]

Symbolism

Gold bars at the Emperor Casino in Macau

Gold has been highly valued in many societies throughout the ages. In keeping with this it has often had a
strongly positive symbolic meaning closely connected to the values held in the highest esteem in the
society in question. Gold may symbolize power, strength, wealth, warmth, happiness, love, hope,
optimism, intelligence, justice, balance, perfection, summer, harvest and the sun.

Great human achievements are frequently rewarded with gold, in the form of gold medals, golden
trophies and other decorations. Winners of athletic events and other graded competitions are usually
awarded a gold medal (e.g., the Olympic Games). Many awards such as the Nobel Prize are made from
gold as well. Other award statues and prizes are depicted in gold or are gold plated (such as the Academy
Awards, the Golden Globe Awards, the Emmy Awards, the Palme d'Or, and the British Academy Film
Awards).

Aristotle in his ethics used gold symbolism when referring to what is now commonly known as the
"golden mean". Similarly, gold is associated with perfect or divine principles, such as in the case of Phi,
which is sometimes called the "golden ratio".

Gold represents great value. Respected people are treated with the most valued rule, the "golden rule". A
company may give its most valued customers "gold cards" or make them "gold members". We value
moments of peace and therefore we say: "silence is golden". In Greek mythology there was the "golden
fleece".

Gold is further associated with the wisdom of aging and fruition. The fiftieth wedding anniversary is
golden. Our precious latter years are sometimes considered "golden years". The height of a civilization is
referred to as a "golden age".

In Christianity gold has sometimes been associated with the extremities of utmost evil and the greatest
sanctity. In the Book of Exodus, the Golden Calf is a symbol of idolatry. In the Book of Genesis,
Abraham was said to be rich in gold and silver, and Moses was instructed to cover the Mercy Seat of the
Ark of the Covenant with pure gold. In Christian art the halos of Christ, Mary and the Christian saints are
golden.

Medieval kings were inaugurated under the signs of sacred oil and a golden crown, the latter symbolizing
the eternal shining light of heaven and thus a Christian king's divinely inspired authority. Wedding rings
have long been made of gold. It is long lasting and unaffected by the passage of time and may aid in the
ring symbolism of eternal vows before God and/or the sun and moon and the perfection the marriage
signifies. In Orthodox Christianity, the wedded couple is adorned with a golden crown during the
ceremony, an amalgamation of symbolic rites.

In popular culture gold holds many connotations but is most generally connected to terms such as good or
great, such as in the phrases: "has a heart of gold", "that's golden!", "golden moment", "then you're
golden!" and "golden boy". Gold also still holds its place as a symbol of wealth and through that, in many
societies, success.

Silver

From Wikipedia, the free encyclopedia


This article is about the chemical element. For the color, see Silver (color). For other uses, see Silver
(disambiguation).

palladium ← silver → cadmium

Cu

Ag

Au

47Ag

Periodic table

Appearance

lustrous white metal

Electrolytically refined silver

General properties

Name, symbol, number silver, Ag, 47

Pronunciation /ˈsɪlvər/

Element category transition metal

Group, period, block 11, 5, d

Standard atomic weight 107.8682 g·mol−1

Electron configuration [Kr] 4d10 5s1

Electrons per shell 2, 8, 18, 18, 1 (Image)

Physical properties

Phase solid

Density (near r.t.) 10.49 g·cm−3

Liquid density at m.p. 9.320 g·cm−3

Melting point 1234.93 K, 961.78 °C, 1763.2 °F

Boiling point 2435 K, 2162 °C, 3924 °F

Heat of fusion 11.28 kJ·mol−1

Heat of vaporization 250.58 kJ·mol−1

Specific heat capacity (25 °C) 25.350 J·mol−1·K−1

Vapor pressure

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 1283 1413 1575 1782 2055 2433

Atomic properties

Oxidation states 1, 2, 3 (amphoteric oxide)

Electronegativity 1.93 (Pauling scale)

Ionization energies 1st: 731.0 kJ·mol−1

2nd: 2070 kJ·mol−1

3rd: 3361 kJ·mol−1

Atomic radius 144 pm

Covalent radius 145±5 pm

Van der Waals radius 172 pm

Miscellanea

Crystal structure face-centered cubic

Magnetic ordering diamagnetic

Electrical resistivity (20 °C) 15.87 nΩ·m

Thermal conductivity (300 K) 429 W·m−1·K−1

Thermal diffusivity (300 K) 174 mm²/s

Thermal expansion (25 °C) 18.9 µm·m−1·K−1

Young's modulus 83 GPa

Shear modulus 30 GPa

Bulk modulus 100 GPa

Poisson ratio 0.37

Mohs hardness 2.5

Vickers hardness 251 MPa

Brinell hardness 24.5 MPa

CAS registry number 7440-22-4

Most stable isotopes

Main article: Isotopes of silver

105Ag: synthetic; half-life 41.2 d; decays by ε to 105Pd, with γ emissions of 0.344, 0.280, 0.644 and 0.443 MeV.
106mAg: synthetic; half-life 8.28 d; decays by ε to 106Pd, with γ emissions of 0.511, 0.717, 1.045 and 0.450 MeV.
107Ag: 51.839% natural abundance; stable with 60 neutrons.
108mAg: synthetic; half-life 418 y; decays by ε to 108Pd or by isomeric transition (0.109 MeV) to 108Ag, with γ emissions of 0.433, 0.614 and 0.722 MeV.
109Ag: 48.161% natural abundance; stable with 62 neutrons.
111Ag: synthetic; half-life 7.45 d; decays by β− (1.036, 0.694 MeV) to 111Cd, with a γ emission of 0.342 MeV.


Silver ( /ˈsɪlvər/) is a metallic chemical element with the chemical symbol Ag (Latin: argentum, from
the Indo-European root *arg- for "grey" or "shining") and atomic number 47. A soft, white, lustrous
transition metal, it has the highest electrical conductivity of any element and the highest thermal
conductivity of any metal. The metal occurs naturally in its pure, free form (native silver), as an alloy
with gold and other metals, and in minerals such as argentite and chlorargyrite. Most silver is produced as
a by-product of copper, gold, lead, and zinc refining.

Silver has long been valued as a precious metal, and it is used to make ornaments, jewelry, high-value
tableware, utensils (hence the term silverware), and currency coins. Today, silver metal is also used in
electrical contacts and conductors, in mirrors and in catalysis of chemical reactions. Its compounds are
used in photographic film and dilute silver nitrate solutions and other silver compounds are used as
disinfectants and microbiocides. While many medical antimicrobial uses of silver have been supplanted
by antibiotics, further research into clinical potential continues.

Contents


 1 Characteristics
 2 Isotopes
 3 Compounds
 4 Applications
o 4.1 Currency
o 4.2 Jewelry and silverware
o 4.3 Dentistry
o 4.4 Photography and electronics
o 4.5 Mirrors and optics
o 4.6 Other industrial and commercial applications
o 4.7 Medical
o 4.8 Clothing
 5 History
 6 Occurrence and extraction
 7 Price
 8 Human exposure and consumption
o 8.1 Monitoring exposure
o 8.2 Use in food
 9 See also
 10 References
 11 External links

[edit] Characteristics

Silver 1000 oz t (~31 kg) bullion bar

Silver is a very ductile and malleable (slightly harder than gold) monovalent coinage metal with a brilliant
white metallic luster that can take a high degree of polish. It has the highest electrical conductivity of all
metals, even higher than copper, but its greater cost has prevented it from being widely used in place of
copper for electrical purposes. Despite this, 13,540 tons were used in the electromagnets for
enriching uranium during World War II (mainly because of the wartime shortage of copper).[1][2] Another
notable exception is in high-end audio cables.[3]

Among metals, pure silver has the highest thermal conductivity[4] (the non-metal diamond and superfluid
helium II are higher) and one of the highest optical reflectivities.[5] (Aluminium slightly outdoes silver in
parts of the visible spectrum, and silver is a poor reflector of ultraviolet light). Silver also has the lowest
contact resistance of any metal. Silver halides are photosensitive and are remarkable for their ability to
record a latent image that can later be developed chemically. Silver is stable in pure air and water, but
tarnishes when it is exposed to air or water containing ozone or hydrogen sulfide, the latter forming a
black layer of silver sulfide which can be cleaned off with dilute hydrochloric acid.[6] The most common
oxidation state of silver is +1 (for example, silver nitrate: AgNO3); in addition, +2 compounds (for
example, silver(II) fluoride: AgF2) and the less common +3 compounds (for example, potassium
tetrafluoroargentate: K[AgF4]) are known.

[edit] Isotopes

Main article: Isotopes of silver

Naturally occurring silver is composed of two stable isotopes, 107Ag and 109Ag, with 107Ag being the most
abundant (51.839% natural abundance). Silver's isotopes are almost equal in abundance, something which
is rare in the periodic table. Silver's atomic weight is 107.8682(2) g/mol.[7][8] Twenty-eight radioisotopes
have been characterized, the most stable being 105Ag with a half-life of 41.29 days, 111Ag with a half-life
of 7.45 days, and 112Ag with a half-life of 3.13 hours. This element has numerous meta states, the most
stable being 108mAg (t1/2 = 418 years), 110mAg (t1/2 = 249.79 days) and 106mAg (t1/2 = 8.28 days). All of the
remaining radioactive isotopes have half-lives that are less than an hour, and the majority of these have
half-lives that are less than 3 minutes.
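The half-lives above translate into remaining fractions through the usual exponential decay law
N(t)/N0 = 2^(−t/t1/2); a minimal sketch using 111Ag as an example:

# Fraction of a radioisotope remaining after time t (same units as the half-life)
def remaining_fraction(t, half_life):
    return 2 ** (-t / half_life)

print(f"{remaining_fraction(30, 7.45):.3f}")  # 111Ag after 30 days: ~0.061, about 6% remains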

Isotopes of silver range in relative atomic mass from 93.943 (94Ag) to 126.936 (127Ag);[9] the primary
decay mode before the most abundant stable isotope, 107Ag, is electron capture and the primary mode
after is beta decay. The primary decay products before 107Ag are palladium (element 46) isotopes, and the
primary products after are cadmium (element 48) isotopes.

The palladium isotope 107Pd decays by beta emission to 107Ag with a half-life of 6.5 million years. Iron
meteorites are the only objects with a high-enough palladium-to-silver ratio to yield measurable variations
in 107Ag abundance. Radiogenic 107Ag was first discovered in the Santa Clara meteorite in 1978.[10] The
discoverers suggest that the coalescence and differentiation of iron-cored small planets may have
occurred 10 million years after a nucleosynthetic event. 107Pd–107Ag correlations observed in bodies that
have clearly been melted since the accretion of the solar system must reflect the presence of unstable
nuclides in the early solar system.[11]

[edit] Compounds

Silver metal dissolves readily in nitric acid (HNO3) to produce silver nitrate (AgNO3), a transparent
crystalline solid that is photosensitive and readily soluble in water. Silver nitrate is used as the starting
point for the synthesis of many other silver compounds, as an antiseptic, and as a yellow stain for glass in
stained glass. Silver metal does not react with sulfuric acid, which is used in jewelry-making to clean and
remove copper oxide firescale from silver articles after silver soldering or annealing. However, silver
reacts readily with sulfur or hydrogen sulfide H2S to produce silver sulfide, a dark-colored compound
familiar as the tarnish on silver coins and other objects. Silver sulfide also forms silver whiskers when
silver electrical contacts are used in an atmosphere rich in hydrogen sulfide.

4 Ag + O2 + 2 H2S → 2 Ag2S + 2 H2O

Cessna 210 equipped with a silver iodide generator for cloud seeding

Silver chloride (AgCl) is precipitated from solutions of silver nitrate in the presence of chloride ions, and
the other silver halides used in the manufacture of photographic emulsions are made in the same way
using bromide or iodide salts. Silver chloride is used in glass electrodes for pH testing and potentiometric
measurement, and as a transparent cement for glass. Silver iodide has been used in attempts to seed
clouds to produce rain.[6] Silver halides are highly insoluble in aqueous solutions and are used in
gravimetric analytical methods.

Silver oxide (Ag2O) can be produced when silver nitrate solutions are treated with a base; it is used as a
positive electrode (anode) in watch batteries. Silver carbonate (Ag2CO3) is precipitated when silver nitrate
is treated with sodium carbonate (Na2CO3).[12]

2 AgNO3 + 2 OH- → Ag2O + H2O + 2 NO3-

2 AgNO3 + Na2CO3 → Ag2CO3 + 2 NaNO3

Silver fulminate (AgCNO), a powerful, touch-sensitive explosive used in percussion caps, is made by
reaction of silver metal with nitric acid in the presence of ethanol (C2H5OH). Another dangerously
explosive silver compound is silver azide (AgN3), formed by reaction of silver nitrate with sodium azide
(NaN3).[13]

Latent images formed in silver halide crystals are developed by treatment with alkaline solutions of
reducing agents such as hydroquinone, metol (4-(methylamino)phenol sulfate) or ascorbate which reduce
the exposed halide to silver metal. Alkaline solutions of silver nitrate can be reduced to silver metal by
reducing sugars such as glucose, and this reaction is used to silver glass mirrors and the interior of glass
Christmas ornaments. Silver halides are soluble in solutions of sodium thiosulfate (Na2S2O3) which is
used as a photographic fixer, to remove excess silver halide from photographic emulsions after image
development.[12]

Silver metal is attacked by strong oxidizers such as potassium permanganate (KMnO4) and potassium
dichromate (K2Cr2O7), and in the presence of potassium bromide (KBr), these compounds are used in
photography to bleach silver images, converting them to silver halides that can either be fixed with
thiosulfate or re-developed to intensify the original image. Silver forms cyanide complexes (silver
cyanide) that are soluble in water in the presence of an excess of cyanide ions. Silver cyanide solutions
are used in electroplating of silver.[12]

[edit] Applications

Many well known uses of silver involve its precious metal properties, including currency, decorative
items and mirrors. The contrast between its bright white color and other media makes it very useful to
the visual arts. It has also long been used to confer high monetary value as objects (such as silver coins
and investment bars) or to make objects symbolic of high social or political
rank.

[edit] Currency

Main articles: Silver coin and Silver standard

Silver, in the form of electrum (a gold-silver alloy), was coined to produce money in around 700 BC by
the Lydians. Later, silver was refined and coined in its pure form. Many nations used silver as the basic
unit of monetary value. In the modern world, silver bullion has the ISO currency code XAG. The name of
the United Kingdom monetary unit "pound" (£) reflects the fact that it originally represented the value of
one troy pound of sterling silver. In the 1800s, following large silver discoveries in the Americas and
fears among European central bankers that silver supplies would grow faster than anticipated, many
nations, including Great Britain and the United States, switched from a silver and gold standard to a gold
standard of monetary value. During the 20th century, a slow transition to fiat currency occurred. Today,
the price of both gold and silver has been rising, attributed to debasement of fiat currencies through
overprinting and the rise in value of commodities.[citation needed]

[edit] Jewelry and silverware

Silver plate with goddess Minerva from the Hildesheim Treasure, 1st century BC

Main articles: jewelry and silversmith

Jewelry and silverware are traditionally made from sterling silver (standard silver), an alloy of 92.5%
silver with 7.5% copper. In the US, only an alloy consisting of at least 90.0% fine silver can be marketed
as "silver" (thus frequently stamped 900). Sterling silver (stamped 925) is harder than pure silver, and has
a lower melting point (893 °C) than either pure silver or pure copper.[6] Britannia silver is an alternative
hallmark-quality standard containing 95.8% silver, often used to make silver tableware and wrought plate.
With the addition of germanium, the patented modified alloy Argentium Sterling Silver is formed, with
improved properties including resistance to firescale.

Sterling silver jewelry is often plated with a thin coat of .999 fine silver to give the item a shiny finish.
This process is called "flashing". Silver jewelry can also be plated with rhodium (for a bright, shiny look)
or gold.

Silver is a constituent of almost all colored carat gold alloys and carat gold solders, giving the alloys paler
color and greater hardness.[14] White 9 carat gold contains 62.5% silver and 37.5% gold, while 22 carat
gold contains about 91.7% gold and 8.3% silver, copper, or a mix of the two. The more copper added,
the more "orange" the gold becomes. Rose gold (stamped 375, 9K, or sometimes 9c) was very popular in
the UK in the late 19th century.[14]

Historically the training and guild organization of goldsmiths included silversmiths as well, and the two
crafts remain largely overlapping. Unlike blacksmiths, silversmiths do not shape the metal while it is red-
hot but instead, work it at room temperature with gentle and carefully placed hammer blows. The essence
of silversmithing is to take a flat piece of metal and by means of different hammers, stakes and other
simple tools, to transform it into a useful object.[15]

While silversmiths specialize in, and principally work, silver, they also work with other metals such as
gold, copper, steel, and brass. They make jewelry, silverware, armor, vases, and other artistic items.
Because silver is such a malleable metal, silversmiths have a large range of choices with how they prefer
to work the metal. Historically, silversmiths and goldsmiths usually belonged to the same guild. In the
western Canadian silversmith tradition, guilds do not exist; however, mentoring
through colleagues becomes a method of professional learning within a community of craftspeople.[16]

Silver is much cheaper than gold, though still valuable, and so is very popular with jewelers who are just
starting out and cannot afford to make pieces in gold, or as a practicing material for goldsmith
apprentices. Silver has also become very fashionable, and is used frequently in more artistic jewelry
pieces.

Traditionally silversmiths mostly made "silverware" (cutlery, table flatware, bowls, candlesticks and
such). Only in more recent times has silversmithing become mainly work in jewelry, as much less solid
silver tableware is now handmade.

[edit] Dentistry

Silver can be alloyed with mercury, tin and other metals at room temperature to make amalgams that are
widely used for dental fillings. To make dental amalgam, a mixture of powdered silver and other metals is
mixed with mercury to make a stiff paste that can be adapted to the shape of a cavity. The dental amalgam
achieves initial hardness within minutes but sets hard in a few hours.

[edit] Photography and electronics

Photography used 30.98% of the silver consumed in 1998, in the form of silver nitrate and silver halides.
In 2001, 23.47% was used for photography, 20.03% in jewelry, 38.51% for industrial uses, and only
3.5% for coins and medals. The use of silver in photography has declined rapidly with the advent of
digital technology and the resulting lower demand for consumer color film: of the 894.5 million ounces
of silver in supply in 2007, just 128.3 million ounces (14.3%) were consumed by the photographic
sector, about half of what the sector consumed in 1998.[17]

Some electrical and electronic products use silver for its superior conductivity, even when tarnished. For
example, printed circuits and RFID antennas can be made using silver paints,[6][18] and computer
keyboards use silver electrical contacts. Silver cadmium oxide is used in high voltage contacts because it
can withstand arcing.

Some manufacturers produce audio connector cables, speaker wires, and power cables using silver
conductors, which have a 6% higher conductivity than ordinary copper ones, but cost more. While many
hi-fi enthusiasts believe that silver wires improve their sound quality, there is debate on the subject.[citation needed]

During World War II the short supply of copper brought about the United States government's use of
silver from the Treasury vaults for conductors at Oak Ridge National Laboratory. (After the war ended
the silver was returned to the vaults.)[19]

Small devices such as hearing aids and watches commonly use silver oxide batteries due to their long life
and high energy/weight ratio. Another usage is high-capacity silver-zinc and silver-cadmium batteries.

[edit] Mirrors and optics

Mirrors which need superior reflectivity for visible light are made with silver as the reflecting material in
a process called silvering, though common mirrors are backed with aluminium. Using a process called
sputtering, silver (and sometimes gold) can be applied to glass at various thicknesses, allowing different
amounts of light to penetrate. Silver is usually reserved for coatings of specialized optics, and the
silvering most often seen in architectural glass and tinted windows on vehicles is produced by sputtered
aluminium, which is cheaper and less susceptible to tarnishing and corrosion.[20] Silver is the reflective
coating of choice for solar reflectors.[21]

[edit] Other industrial and commercial applications

Yanagisawa A9932J alto saxophone: has a solid silver bell and neck with solid phosphor bronze body.
The bell, neck and key-cups are extensively engraved. Manufactured in 2008

Silver and silver alloys are used in the construction of high quality musical wind instruments of many
types.[22] Flutes, in particular, are commonly constructed of silver alloy or silver plated, both for
appearance and for the frictional surface properties of silver.[23]

Silver is ideal for use as a catalyst in oxidation reactions; for example, the
production of formaldehyde from methanol and air by means of silver screens or crystallites containing a
minimum 99.95 weight-percent silver. Silver (upon some suitable support) is probably the only catalyst
available today to convert ethylene to ethylene oxide (later hydrolyzed to ethylene glycol, used for
making polyesters)— an important industrial reaction. Because silver readily absorbs free neutrons, it is
commonly used to make control rods that regulate the fission chain reaction in pressurized water nuclear
reactors, generally in the form of an alloy containing 80% silver, 15% indium, and 5% cadmium. Silver is
used to make solder and brazing alloys, and as a thin layer on bearing surfaces can provide a significant
increase in galling resistance and reduce wear under heavy load, particularly against steel.

[edit] Medical

Main article: Medical uses of silver

Silver ions and silver compounds show a toxic effect on some bacteria, viruses, algae and fungi, typical
for heavy metals like lead or mercury, but without the high toxicity to humans that is normally
associated with these other metals. Its germicidal effects kill many microbial organisms in vitro, but
testing and standardization of silver products is difficult.[24]

Hippocrates, the "father of medicine",[25] wrote that silver had beneficial healing and anti-disease
properties, and the Phoenicians used to store water, wine, and vinegar in silver bottles to prevent spoiling.
In the early 1900s people[where?] would put silver coins in milk bottles to prolong the milk's freshness.[26]
Its germicidal effects increased its value in utensils and as jewellery. The exact process of silver's
germicidal effect is still not entirely understood, although theories exist. One of these is the oligodynamic
effect, which explains the effect on microorganisms but would not explain antiviral effects.

Silver is widely used in topical gels and impregnated into bandages because of its wide-spectrum
antimicrobial activity. The anti-microbial properties of silver stem from the chemical properties of its
ionized form, Ag+. This ion forms strong molecular bonds with other substances used by bacteria to
respire, such as molecules containing sulfur, nitrogen, and oxygen.[27] When the Ag+ ion forms a complex
with these molecules, they are rendered unusable by the bacteria, depriving them of necessary compounds
and eventually leading to the bacteria's death.

Silver compounds were used to prevent infection in World War I before the advent of antibiotics. Silver
nitrate solution use continued, then was largely replaced by silver sulfadiazine cream (SSD cream),[28]
which generally became the "standard of care" for the antibacterial and antibiotic treatment of serious
burns until the late 1990s.[29] Now, other options, such as silver-coated dressings (activated silver
dressings), are used in addition to SSD cream. However, the evidence for the effectiveness of such silver-
treated dressings is mixed and although the evidence is promising it is marred by the poor quality of the
trials used to assess these products. Consequently a systematic review by the Cochrane Collaboration
(published in 2008) found insufficient evidence to recommend the use of silver-treated dressings to treat
infected wounds.[30]

There has been renewed interest in silver as a broad-spectrum antimicrobial agent. One application has
silver being used with alginate, a naturally occurring biopolymer derived from seaweed, in a range of
products designed to prevent infections as part of wound management procedures, particularly applicable
to burn victims.[31] In 2007, a company introduced a glass product that they claimed had antibacterial
properties by coating the glass with a thin layer of silver.[32] In addition, the U.S. Food and Drug
Administration (FDA) has recently approved an endotracheal breathing tube with a fine coat of silver for
use in mechanical ventilation, after studies found it reduced the risk of ventilator-associated
pneumonia.[33]

Another example uses the known enhanced antibacterial action of silver by applying an electric field. It
was found recently that the antibacterial action of silver electrodes is greatly improved if the electrodes
are covered with silver nanorods.[34]

Silver is commonly used in catheters. Silver alloy catheters are more effective than standard catheters at
reducing bacteriuria in hospitalized adults undergoing short-term catheterisation. One meta-analysis clarifies
discrepant results among trials of silver-coated urinary catheters by revealing that silver alloy catheters
are significantly more effective in preventing urinary tract infections than are silver oxide catheters.
Though silver alloy urinary catheters cost about $6 more than standard urinary catheters, they may be
worth the extra cost since catheter-related infection is a common cause of nosocomial infection and
bacteremia.[35]

Various silver compounds, devices to make homeopathic solutions and colloidal silver suspensions are
sold as remedies for numerous conditions. Although most colloidal silver preparations are harmless, there
are cases where excessive consumption led to argyria over a period of months or years.[36] Consumption
of high doses of colloidal silver can result in coma, pleural edema, and hemolysis.[37]

[edit] Clothing

Silver inhibits the growth of bacteria and fungi and thus is added to clothing, such as socks, to reduce
odor and the risk of bacterial and fungal infection. Silver is incorporated into clothing or shoes either by
integrating silver nanoparticles into the polymer from which yarns are made or by coating yarns with
silver.[38][39] The loss of silver during washing varies between textile technologies, and the resultant effect
on the environment is not yet fully known.[40][41]

[edit] History

The crescent Moon has been used since ancient times to represent silver.

Silver has been used for thousands of years for ornaments and utensils, for trade, and as the basis for
many monetary systems. Its value as a precious metal was long considered second only to gold. The word
"silver" appears in Anglo-Saxon in various spellings such as seolfor and siolfor. A similar form is seen
throughout the Germanic languages (compare Old High German silabar and silbir). The chemical symbol
Ag is from the Latin for "silver", argentum (compare Greek άργυρος, árgyros), from the Indo-European
root *arg- meaning "white" or "shining". Silver has been known since ancient times; it is mentioned in
the Book of Genesis, and slag heaps found in Asia Minor and on the islands of the Aegean Sea indicate
that silver was being separated from lead as early as the 4th millennium BC using surface mining.[6]

The stability of the Roman currency relied to a high degree on the supply of silver bullion which Roman
miners produced on a scale unparalleled before the discovery of the New World.[42][43] Reaching a peak
production of 200 t per year, an estimated silver stock of 10,000 t circulated in the Roman economy in the
mid-2nd century AD, five to ten times larger than the combined amount of silver available to medieval
Europe and the Caliphate around 800 AD.[42][43]

Recorded use of silver to prevent infection dates to ancient Greece and Rome; it was rediscovered in the
Middle Ages, when it was used for several purposes, such as disinfecting water and food during storage
and treating burns and wounds as a wound dressing. In the 19th century, sailors on long
ocean voyages would put silver coins in barrels of water and wine to keep the liquid pure. Pioneers in
America used the same idea as they made their journey from coast to coast. Silver solutions were
approved in the 1920s by the US Food and Drug Administration for use as antibacterial agents.

In the Gospels, Jesus' disciple Judas Iscariot is infamous for having taken a bribe of thirty coins of silver
from religious leaders in Jerusalem to turn Jesus Christ over to the Romans.

In certain circumstances, Islam permits Muslim men to wear silver jewelry. Muhammad himself wore a
silver signet ring[citation needed].

[edit] Occurrence and extraction

Main article: Silver mining

Native silver

Time trend of silver production

Silver is found in native form, as an alloy with gold (see also: electrum), and in ores containing sulfur,
arsenic, antimony or chlorine. Ores include argentite (Ag2S), chlorargyrite (AgCl), which includes horn
silver, and pyrargyrite (Ag3SbS3). The principal sources of silver are the ores of copper, copper-nickel,
lead, and lead-zinc obtained from Peru, Mexico, China, Australia, Chile, Poland and Serbia.[6] Peru and
Mexico have been mining silver since 1546 and are still major world producers. Top silver-producing
mines are Proaño / Fresnillo (Mexico), Cannington (Queensland, Australia), Dukat (Russia),
Uchucchacua (Peru) and Greens Creek mine (Alaska).[44]

The metal is primarily produced through electrolytic copper refining, gold, nickel and zinc refining, and
by application of the Parkes process on lead metal obtained from lead ores that contain small amounts of
silver. Commercial-grade fine silver is at least 99.9% pure, and purities greater than 99.999% are
available. In 2007, Peru was the world's top producer of silver, closely followed by Mexico, according to
the British Geological Survey.[clarification needed]

[edit] Price

Main articles: Silver as an investment and Silver standard

Silver output in 2005

At a December 2010 price of about US$28 per troy ounce,[45] silver is about 1/50th the price of gold.
The ratio has varied from 1/15 to 1/100 in the past 100 years.[46]

In 1980, the silver price rose to an all-time high of US$49.45 per troy ounce (T.O.) due to market
manipulation by Nelson Bunker Hunt and Herbert Hunt.[citation needed] Some time after Silver Thursday, the
price was back to $10 per troy ounce.[47] By December 2001 the price had dropped to US$4.15/T.O., and
in May 2006 it had risen back as high as US$15.21/T.O. In March 2008, silver reached US$21.34/T.O. [48]
In December 2010, silver reached as high as US$30.

The price of silver is important in Judaic Law. The lowest fiscal amount that a Jewish court, or Beth Din,
can convene to adjudicate a case over is a shova pruta (value of a Babylonian pruta coin). This is fixed at
1/8 of a gram of pure, unrefined silver, at market price.
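At the silver price quoted above, the pruta threshold works out to roughly a tenth of a US dollar; a sketch
of the conversion:

# Value of a pruta (1/8 gram of silver) at a given spot price per troy ounce
TROY_OUNCE_G = 31.1035

def pruta_value_usd(spot_usd_per_troy_oz):
    return spot_usd_per_troy_oz / TROY_OUNCE_G / 8

print(f"{pruta_value_usd(28.0):.3f} USD")  # ~0.113 USD at US$28 per troy ounce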

[edit] Human exposure and consumption

Silver plays no known natural biological role in humans, and possible health effects of silver are a
disputed subject. Silver itself is not toxic, but most silver salts are, and some may be carcinogenic.[dubious – discuss]
Silver and compounds containing silver (like colloidal silver) can be absorbed into the circulatory
system and become deposited in various body tissues leading to a condition called argyria which results
in a blue-grayish pigmentation of the skin, eyes, and mucous membranes. Although this condition does
not otherwise harm a person's health, it is disfiguring and usually permanent. Argyria is rare, and mild
forms are sometimes mistaken for cyanosis.[6]

[edit] Monitoring exposure

Overexposure to silver can occur in workers in the metallurgical industry, persons taking silver-
containing dietary supplements, patients who have received silver sulfadiazine treatment and individuals
who accidentally or intentionally ingest silver salts. Silver concentrations in whole blood, plasma, serum
or urine may be measured to monitor for safety in exposed workers, to confirm the diagnosis in potential
poisoning victims or to assist in the forensic investigation in a case of fatal overdosage.[49]

[edit] Use in food

Silver is used as a food coloring; it has the E174 designation and is approved in the European Union. The
amount of silver in the coating of dragées or in cookie decoration is minuscule. The safety of silver for
use in food is disputed. Traditional Indian dishes sometimes include the use of decorative silver foil
known as vark, and in various cultures silver dragée are used to decorate cakes, cookies, and other dessert
items. The use of silver as a food additive is not approved in the United States and Australia.[citation needed]

Iron

From Wikipedia, the free encyclopedia


"Fe" redirects here. For other uses, see Fe (disambiguation).

This article is about the chemical element. For other uses, see Iron (disambiguation).

manganese ← iron → cobalt

-

Fe

Ru

26Fe

Periodic table

Appearance

lustrous metallic with a grayish tinge

Spectral lines of Iron

General properties

Name, symbol, number iron, Fe, 26

Pronunciation US /aɪ.ərn/; UK /ˈaɪərn/

Element category transition metal

Group, period, block 8, 4, d

Standard atomic weight 55.845 g·mol−1

Electron configuration [Ar] 3d6 4s2

Electrons per shell 2, 8, 14, 2 (Image)

Physical properties

Phase solid

Density (near r.t.) 7.874 g·cm−3

Liquid density at m.p. 6.98 g·cm−3

Melting point 1811 K, 1538 °C, 2800 °F

Boiling point 3134 K, 2862 °C, 5182 °F

Heat of fusion 13.81 kJ·mol−1

Heat of vaporization 340 kJ·mol−1

Specific heat capacity (25 °C) 25.10 J·mol−1·K−1

Vapor pressure

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 1728 1890 2091 2346 2679 3132

Atomic properties

Electronegativity 1.83 (Pauling scale)

Ionization energies 1st: 762.5 kJ·mol−1


2nd: 1561.9 kJ·mol−1

3rd: 2957 kJ·mol−1

Atomic radius 126 pm

Covalent radius 132±3 (low spin), 152±6 (high spin) pm

Miscellanea

Crystal structure body-centered cubic

Magnetic ordering ferromagnetic

Curie point 1043 K

Electrical resistivity (20 °C) 96.1 nΩ·m

Thermal conductivity (300 K) 80.4 W·m−1·K−1

Thermal expansion (25 °C) 11.8 µm·m−1·K−1

Speed of sound (thin rod) (r.t., electrolytic) 5120 m·s−1

Young's modulus 211 GPa

Shear modulus 82 GPa

Bulk modulus 170 GPa

Poisson ratio 0.29

Mohs hardness 4

Vickers hardness 608 MPa

Brinell hardness 490 MPa

CAS registry number 7439-89-6

Most stable isotopes

Main article: Isotopes of iron

54Fe: 5.8% natural abundance; observationally stable, half-life >3.1×10^22 y; predicted to decay by double electron capture to 54Cr.
55Fe: synthetic; half-life 2.73 y; decays by electron capture (0.231 MeV) to 55Mn.
56Fe: 91.72% natural abundance; stable with 30 neutrons.
57Fe: 2.2% natural abundance; stable with 31 neutrons.
58Fe: 0.28% natural abundance; stable with 32 neutrons.
59Fe: synthetic; half-life 44.503 d; decays by β− (1.565 MeV) to 59Co.
60Fe: synthetic; half-life 2.6×10^6 y; decays by β− (3.978 MeV) to 60Co.


Iron ( /ˈaɪ.ərn/ or /ˈaɪərn/) is a chemical element with the symbol Fe (Latin: ferrum) and atomic number
26. It is a metal in the first transition series. It is the most common element (by mass) in the planet Earth
as a whole, forming much of Earth's outer and inner core, and it is the fourth most common element in
the Earth's crust. It is produced in abundance by fusion in high-mass stars, where the production of
nickel-56 (which decays to iron) is the last exothermic nuclear fusion reaction, making iron the last
element to be produced before the collapse of a supernova scatters the precursor radionuclides of iron
into space.

Like other Group 8 elements, iron exists in a wide range of oxidation states, −2 to +6, although +2 and
+3 are the most common. Elemental iron occurs in meteoroids and other low oxygen environments, but is
reactive to oxygen and water. Fresh iron surfaces appear lustrous silvery-gray, but oxidize in normal air to
give iron oxides, also known as rust. Unlike many other metals which form passivating oxide layers, iron
oxides occupy more volume than iron metal, and thus iron oxides flake off and expose fresh surfaces for
corrosion.

Iron metal has been used since ancient times, though lower-melting copper alloys were used first in
history. Pure iron is soft (softer than aluminium), but is unobtainable by smelting. The material is
significantly hardened and strengthened by impurities from the smelting process, such as carbon. A
certain proportion of carbon (between 0.2% and 2.1%) produces steel, which may be up to 1000 times
harder than pure iron. Crude iron metal is produced in blast furnaces, where ore is reduced by coke to cast
iron. Further refinement with oxygen reduces the carbon content to make steel. Steels and low carbon iron
alloys with other metals (alloy steels) are by far the most common metals in industrial use, due to their
great range of desirable properties.

Iron chemical compounds, which include ferrous and ferric compounds, have many uses. Iron oxide
mixed with aluminium powder can be ignited to create a thermite reaction, used in welding and purifying
ores. It forms binary compounds with the halogens and the chalcogens. Among its organometallic
compounds, ferrocene was the first sandwich compound discovered.

Iron plays an important role in biology, forming complexes with molecular oxygen in hemoglobin and
myoglobin; these two compounds are common oxygen transport proteins in vertebrates. Iron is also the
metal used at the active site of many important redox enzymes dealing with cellular respiration and
oxidation and reduction in plants and animals.

Contents


 1 Characteristics
o 1.1 Mechanical properties
o 1.2 Allotropes
o 1.3 Isotopes
o 1.4 Nucleosynthesis
o 1.5 Occurrence
 1.5.1 Planetary occurrence
 2 Chemistry and compounds
o 2.1 Binary compounds
o 2.2 Coordination and organometallic compounds
 3 History
o 3.1 Wrought iron
o 3.2 Cast iron
o 3.3 Steel
o 3.4 Recent discoveries
 4 Industrial production
o 4.1 Blast furnace
o 4.2 Direct iron reduction
o 4.3 Further processes
 5 Applications
o 5.1 Metallurgical
o 5.2 Of compounds
o 5.3 Uptake and storage
o 5.4 Biological role
o 5.5 Bioinorganic compounds
o 5.6 Health and diet
o 5.7 Regulation of uptake
 6 Precautions
 7 See also
 8 References
 9 Books
 10 External links

Characteristics

Mechanical properties

Characteristic values of tensile strength (TS) and Brinell hardness (BH) of different forms of iron.[1][2]

Material                     TS (MPa)   BH (Brinell)
Iron whiskers                11000      –
Ausformed (hardened) steel   2930       850–1200
Martensitic steel            2070       600
Bainitic steel               1380       400
Pearlitic steel              1200       350
Cold-worked iron             690        200
Small-grain iron             340        100
Carbon-containing iron       140        40
Pure, single-crystal iron    10         3

Mechanical properties of iron and its alloys are evaluated using a variety of tests, such as the Brinell test,
Rockwell test, or tensile strength tests, among others; the results on iron are so consistent that iron is often
used to calibrate measurements or to relate the results of one test to another. [2][3] Those measurements
reveal that mechanical properties of iron crucially depend on purity: Purest research-purpose single
crystals of iron are softer than aluminium. Addition of only 10 parts per million of carbon doubles their
strength.[1] The hardness increases rapidly with carbon content up to 0.2% and saturates at ~0.6%. [4] The
purest industrially produced iron (about 99.99% purity) has a hardness of 20–30 Brinell.[5]

Allotropes

Main article: Allotropes of iron

Iron represents an example of allotropy in a metal. There are three allotropic forms of iron, known as α, γ
and δ.

Phase diagram of pure iron

As molten iron cools down it crystallizes at 1538 °C into its δ allotrope, which has a body-centered cubic
(bcc) crystal structure. As it cools further its crystal structure changes to face-centered cubic (fcc) at
1394 °C, when it is known as γ-iron, or austenite. At 912 °C the crystal structure again becomes bcc as α-
iron, or ferrite, is formed, and at 770 °C (the Curie point, Tc) iron becomes ferromagnetic. As the iron passes
through the Curie temperature there is no change in crystalline structure, but there is a change in "domain
structure", where each domain contains iron atoms with a particular electronic spin. In unmagnetized iron,
all the electronic spins of the atoms within one domain are in the same direction; the neighboring domains
point in various directions and thus cancel out. In magnetized iron, the electronic spins of all the domains
are aligned, so that the magnetic effects of neighboring domains reinforce each other. Although each
domain contains billions of atoms, they are very small, about 10 micrometres across.[6]

Iron is of greatest importance when mixed with certain other metals and with carbon to form steels. There
are many types of steels, all with different properties, and an understanding of the properties of the
allotropes of iron is key to the manufacture of good quality steels.

Alpha iron, also known as ferrite, is the most stable form of iron at normal temperatures. It is a fairly soft
metal that can dissolve only a small concentration of carbon (no more than 0.021% by mass at 910 °C).[7]

Above 912 °C and up to 1400 °C α-iron undergoes a phase transition from bcc to the fcc configuration of
γ-iron, also called austenite. This is similarly soft and metallic but can dissolve considerably more carbon
(as much as 2.04% by mass at 1146 °C). This form of iron is used in the type of stainless steel used for
making cutlery, and hospital and food-service equipment.[6]

Isotopes

Main article: Isotopes of iron

Naturally occurring iron consists of four stable isotopes: 5.845% of 54Fe, 91.754% of 56Fe, 2.119% of 57Fe
and 0.282% of 58Fe. The nuclide 54Fe is predicted to undergo double beta decay, but this process has never been observed experimentally for these nuclei; only a lower limit on the half-life has been established: t1/2 > 3.1×10^22 years. 60Fe is an extinct radionuclide with a long half-life (2.6 million years).[8]

Much of the past work on measuring the isotopic composition of Fe has focused on determining 60Fe
variations due to processes accompanying nucleosynthesis (i.e., meteorite studies) and ore formation. In
the last decade however, advances in mass spectrometry technology have allowed the detection and
quantification of minute, naturally occurring variations in the ratios of the stable isotopes of iron. Much of
this work has been driven by the Earth and planetary science communities, although applications to
biological and industrial systems are beginning to emerge.[9]

The most abundant iron isotope 56Fe is of particular interest to nuclear scientists because it lies near the peak of the nuclear binding-energy curve: neither the fission nor the fusion of 56Fe can liberate significant energy. Since 56Ni is easily produced from lighter nuclei in the alpha process in nuclear reactions in supernovae (see silicon burning process), nickel-56 (14 alpha particles) is the endpoint of fusion chains inside extremely massive stars, since addition of another alpha particle would result in zinc-60, which requires a great deal more energy. This nickel-56, which has a half-life of about 6 days, is therefore made in quantity in these stars, but soon decays by two successive positron emissions within supernova decay products in the supernova remnant gas cloud, first to radioactive cobalt-56 and then to stable iron-56. This last nuclide is therefore common in the universe relative to other stable metals of approximately the same atomic weight.
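The two-step decay just described can be put in quantitative terms with the Bateman equations. The sketch below is a minimal illustration; the half-lives used (about 6.1 days for 56Ni and about 77 days for 56Co) are approximate literature values, the second of which is not stated in the text above:

    import math

    T_NI, T_CO = 6.1, 77.0  # half-lives in days (approximate literature values)
    L1, L2 = math.log(2) / T_NI, math.log(2) / T_CO

    def chain_fractions(t_days, n0=1.0):
        # Fractions of 56Ni, 56Co and 56Fe at time t, starting from pure 56Ni.
        ni = n0 * math.exp(-L1 * t_days)
        co = n0 * L1 / (L2 - L1) * (math.exp(-L1 * t_days) - math.exp(-L2 * t_days))
        return ni, co, n0 - ni - co

    for t in (0, 30, 100, 365):
        ni, co, fe = chain_fractions(t)
        print(f"t = {t:3d} d: Ni {ni:.3f}, Co {co:.3f}, Fe {fe:.3f}")
    # After a year most of the original 56Ni has become stable 56Fe.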

In phases of the meteorites Semarkona and Chervony Kut, a correlation between the concentration of 60Ni, the daughter product of 60Fe, and the abundance of the stable iron isotopes has been found, which is evidence for the existence of 60Fe at the time of formation of the solar system. Possibly the energy released by the decay of 60Fe, together with the energy released by the decay of the radionuclide 26Al, contributed to the remelting and differentiation of asteroids after their formation 4.6 billion years ago[citation needed]. The abundance of 60Ni present in extraterrestrial material may also provide further insight into the origin of the solar system and its early history. Of the stable isotopes, only 57Fe has a nuclear spin (−1/2).

Nuclei of iron atoms have some of the highest binding energies per nucleon, surpassed only by the nickel isotope 62Ni, which is formed by nuclear fusion in stars. Although a further tiny energy gain could be extracted by synthesizing 62Ni, conditions in stars are unsuitable for this process to be favored. The elemental distribution on Earth greatly favors iron over nickel, as presumably does supernova element production.[10]

Iron-56 is the heaviest stable isotope produced by the alpha process in stellar nucleosynthesis; elements
heavier than iron and nickel require a supernova for their formation. Iron is the most abundant element in
the core of red giants, and is the most abundant metal in iron meteorites and in the dense metal cores of
planets such as Earth.

Nucleosynthesis

Iron is created in extremely large, extremely hot (over 2.5 billion kelvin) stars through a process called silicon burning. It is the heaviest stable element to be produced in this manner. The process starts with the second largest stable nucleus created by silicon burning: calcium. One stable nucleus of calcium fuses with one helium nucleus, creating unstable titanium. Before the titanium decays, it can fuse with another helium nucleus, creating unstable chromium. Before the chromium decays, it can fuse with another helium nucleus, creating unstable iron. Before the iron decays, it can fuse with another helium nucleus, creating unstable nickel-56. Any further fusion of nickel-56 consumes energy instead of producing energy, so after the production of nickel-56, the star does not produce the energy necessary to keep the core from collapsing. Eventually, the nickel-56 decays to unstable cobalt-56, which in turn decays to stable iron-56. When the core of the star collapses, it creates a supernova. Supernovas also create additional forms of stable iron via the r-process.
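The alpha-capture ladder described above can be written out explicitly. In the sketch below, the mass numbers (40Ca, 44Ti, 48Cr, 52Fe, 56Ni) are the usual silicon-burning nuclides, supplied here for illustration rather than taken from the text:

    # Each silicon-burning step adds one helium-4 nucleus (A increases by 4).
    ladder = [("Ca", 40), ("Ti", 44), ("Cr", 48), ("Fe", 52), ("Ni", 56)]
    for (sym1, a1), (sym2, a2) in zip(ladder, ladder[1:]):
        assert a2 == a1 + 4  # one alpha particle per capture
        print(f"{a1}{sym1} + 4He -> {a2}{sym2}")
    # 56Ni (14 alpha particles) is the endpoint; a further capture to 60Zn
    # would consume rather than release energy.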

Occurrence

See also Category: Iron minerals

Planetary occurrence

Iron meteorites of composition similar to Earth's inner and outer core

Iron is the sixth most abundant element in the Universe, formed as the final step of nucleosynthesis, by
silicon fusing in massive stars. Metallic iron is rarely found on the surface of the earth because it tends to
oxidize, but its oxides are pervasive and represent the primary ores. While it makes up about 5% of the
Earth's crust, both the Earth's inner and outer core are believed to consist largely of an iron-nickel alloy
constituting 35% of the mass of the Earth as a whole. Iron is consequently the most abundant element on
Earth, but only the fourth most abundant element in the Earth's crust.[11][12] Most of the iron in the crust is
found combined with oxygen as iron oxide minerals such as hematite and magnetite. Large deposits of
iron are found in banded iron formations. These geological formations are a type of rock consisting of
repeated thin layers of iron oxides, either magnetite (Fe3O4) or hematite (Fe2O3), alternating with bands of
iron-poor shale and chert. The banded iron formations were mostly laid down between 3,700 million years ago and 1,800 million years ago.[13][14]

About 1 in 20 meteorites consist of the unique iron-nickel minerals taenite (35–80% iron) and kamacite
(90–95% iron). Although rare, iron meteorites are the main form of natural metallic iron on the Earth's
surface.[15] It was proven by Mössbauer spectroscopy that the red color of the surface of Mars is derived
from an iron oxide-rich regolith.[16]

Chemistry and compounds

See also Category: Iron compounds

Oxidation state   Representative compound

−2   Disodium tetracarbonylferrate (Collman's reagent)
−1
0    Iron pentacarbonyl
1    Cyclopentadienyliron dicarbonyl dimer ("Fp2")
2    Ferrous sulfate, ferrocene
3    Ferric chloride, ferrocenium tetrafluoroborate
4    Barium ferrate(IV)
6    Potassium ferrate

Iron forms compounds mainly in the +2 and +3 oxidation states. Traditionally, iron(II) compounds are
called ferrous, and iron(III) compounds ferric. Iron also occurs in higher oxidation states, an example
being the purple potassium ferrate (K2FeO4), which contains iron in its +6 oxidation state. Iron(IV) is a common intermediate in many biochemical oxidation reactions.[17][18] Numerous organometallic
compounds contain formal oxidation states of +1, 0, −1, or even −2. The oxidation states and other
bonding properties are often assessed using the technique of Mössbauer spectroscopy.[19] There are also
many mixed valence compounds that contain both iron(II) and iron(III) centers, such as magnetite and
Prussian blue (Fe4(Fe[CN]6)3).[18] The latter is used as the traditional "blue" in blueprints.[20]

Hydrated iron(III) chloride, also known as ferric chloride

The iron compounds produced on the largest scale in industry are iron(II) sulfate (FeSO4·7H2O) and
iron(III) chloride (FeCl3). The former is one of the most readily available sources of iron(II), but is less
stable to aerial oxidation than Mohr's salt ((NH4)2Fe(SO4)2·6H2O). Iron(II) compounds tend to be
oxidized to iron(III) compounds in the air.[18]

Unlike many other metals, iron does not form amalgams with mercury. As a result, mercury is traded in
standardized 76 pound flasks (34 kg) made of iron.[21]

Binary compounds

Iron reacts with oxygen in the air to form various oxide and hydroxide compounds; the most common are
iron(II,III) oxide (Fe3O4), and iron(III) oxide (Fe2O3). Iron(II) oxide also exists, though it is unstable at
room temperature. These oxides are the principal ores for the production of iron (see bloomery and blast
furnace). They are also used in the production of ferrites, useful magnetic storage media in computers,
and pigments. The best known sulfide is iron pyrite (FeS2), also known as fool's gold owing to its golden
luster.[18]

The binary ferrous and ferric halides are well known, with the exception of ferric iodide. The ferrous
halides typically arise from treating iron metal with the corresponding binary halogen acid to give the
corresponding hydrated salts.[18]

Fe + 2 HX → FeX2 + H2

Iron reacts with fluorine, chlorine, and bromine to give the corresponding ferric halides, ferric chloride
being the most common:

2 Fe + 3 X2 → 2 FeX3 (X = F, Cl, Br)

Coordination and organometallic compounds

See also: organoiron chemistry

Prussian blue

Several cyanide complexes are known. The most famous example is Prussian blue, (Fe4(Fe[CN]6)3).
Potassium ferricyanide and potassium ferrocyanide are also known; the formation of Prussian blue upon
reaction with iron(II) and iron(III) respectively forms the basis of a "wet" chemical test.[18] Prussian blue
is also used as an antidote for thallium and radioactive caesium poisoning.[22][23] Prussian blue can be used
in laundry bluing to correct the yellowish tint left by ferrous salts in water.[24]

Ferrocene

Several carbonyl compounds of iron are known. The premier iron(0) compound is iron pentacarbonyl,
Fe(CO)5, which is used to produce carbonyl iron powder, a highly reactive form of metallic iron.
Thermolysis of iron pentacarbonyl gives the trinuclear cluster, triiron dodecacarbonyl. Collman's reagent,
disodium tetracarbonylferrate, is a useful reagent for organic chemistry; it contains iron in the −2
oxidation state. Cyclopentadienyliron dicarbonyl dimer contains iron in the rare +1 oxidation state.[25]

Ferrocene is an extremely stable complex. The first sandwich compound, it contains an iron(II) center
with two cyclopentadienyl ligands bonded through all ten carbon atoms. This arrangement was a shocking
novelty when it was first discovered,[26] but the discovery of ferrocene has led to a new branch of
organometallic chemistry. Ferrocene itself can be used as the backbone of a ligand, e.g. dppf. Ferrocene
can itself be oxidized to the ferrocenium cation (Fc+); the ferrocene/ferrocenium couple is often used as a
reference in electrochemistry.[27]

History

Main article: History of ferrous metallurgy

Wrought iron

The symbol for Mars has been used since antiquity to represent iron.

The Delhi iron pillar is an example of the iron extraction and processing methodologies of India. The iron
pillar at Delhi has withstood corrosion for the last 1600 years.

Iron objects of great age are much rarer than objects made of gold or silver due to the ease of corrosion of
iron.[28] Beads made of meteoric iron in 3500 B.C. or earlier were found in Gerzah, Egypt by G. A.
Wainwright.[29] The beads contain 7.5% nickel, which is a signature of meteoric origin since iron found in
the Earth's crust has very little to no nickel content. Meteoric iron was highly regarded due to its origin in the heavens and was often used to forge weapons and tools; whole specimens were sometimes placed in churches.[29]
Items that were likely made of iron by Egyptians date from 2500 to 3000 BC.[28] Iron had a distinct
advantage over bronze in warfare implements. It was much harder and more durable than bronze,
although susceptible to rust.

The first iron production started in the Middle Bronze Age but it took several centuries before iron
displaced bronze. Samples of smelted iron from Asmar, Mesopotamia and Tall Chagar Bazaar in northern
Syria were made sometime between 2700 and 3000 BC.[30] The Hittites appear to have been the first to understand the production of iron from its ores, and they regarded it highly in their society.[24] They began to
smelt iron between 1500 and 1200 BC and the practice spread to the rest of the Near East after their
empire fell in 1180 BC.[30] The subsequent period is called the Iron Age. Iron smelting, and thus the Iron
Age, reached Europe two hundred years later and arrived in Zimbabwe, Africa by the 8th century.[30]

Artifacts from smelted iron occur in India from 1800 to 1200 BC,[31] and in the Levant from about 1500
BC (suggesting smelting in Anatolia or the Caucasus).[32][33]

The Book of Genesis, fourth chapter, verse 22 contains the first mention of iron in the Old Testament of
the Bible; "Tubal-cain, an instructor of every artificer in brass and iron."[28] Other verses allude to iron
mining (Job 28:2), iron used as a stylus (Job 19:24), furnace (Deuteronomy 4:20), chariots (Joshua
17:16), nails (I Chron. 22:3), saws and axes (II Sam. 12:31), and cooking utensils (Ezekiel 4:3).[34] The
metal is also mentioned in the New Testament, for example in Acts chapter 12 verse 10, "[Peter passed
through] the iron gate that leadeth unto the city" of Antioch.[35] Iron is also mentioned in the Quran, which dates from about 1400 years ago.

Iron working was introduced to Greece in the late 11th century BC.[36] The spread of ironworking in
Central and Western Europe is associated with Celtic expansion. According to Pliny the Elder, iron use
was common in the Roman era.[29] The annual iron output of the Roman Empire is estimated at 84,750
t,[37] while the similarly populous Han China produced around 5,000 t.[38]

During the Industrial Revolution in Britain, Henry Cort began refining iron from pig iron to wrought iron
(or bar iron) using innovative production systems. In 1783 he patented the puddling process for refining pig iron. It was later improved by others, including Joseph Hall.

Cast iron

Cast iron was first produced in China about 550 BC,[39] but was hardly used in Europe until the medieval
period.[40][41] During the medieval period, means were found in Europe of producing wrought iron from
cast iron (in this context known as pig iron) using finery forges. For all these processes, charcoal was
required as fuel.

Coalbrookdale by Night, 1801. Blast furnaces light the iron making town of Coalbrookdale.

Medieval blast furnaces were about 10 feet (3.0 m) tall and made of fireproof brick; forced air was
usually provided by hand-operated bellows.[41] Modern blast furnaces have grown much bigger.

In 1709, Abraham Darby I established a coke-fired blast furnace to produce cast iron. The ensuing
availability of inexpensive iron was one of the factors leading to the Industrial Revolution. Toward the
end of the 18th century, cast iron began to replace wrought iron for certain purposes, because it was
cheaper. The carbon content of iron was not implicated as the reason for the differences in properties of
wrought iron, cast iron and steel until the 18th century.[30]

Since iron was becoming cheaper and more plentiful, it also became a major structural material following
the building of the innovative first iron bridge in 1778.

Steel

See also: Steelmaking

Steel (with smaller carbon content than pig iron but more than wrought iron) was first produced in
antiquity by using a bloomery. Blacksmiths in Luristan in western Iran were making good steel by
1000 BC.[30] Improved versions, wootz steel in India and Damascus steel in the Middle East, were developed around 300 BC and AD 500 respectively. These methods were specialized, and so steel did not become
a major commodity until the 1850s.[42]

New methods of producing it by carburizing bars of iron in the cementation process were devised in the
17th century AD. In the Industrial Revolution, new methods of producing bar iron without charcoal were
devised and these were later applied to produce steel. In the late 1850s, Henry Bessemer invented a new
steelmaking process, involving blowing air through molten pig iron, to produce mild steel. This made
steel much more economical, thereby leading to wrought iron no longer being produced.[citation needed]

Recent discoveries

This section requires expansion.

 the Mössbauer effect is most readily observed with iron (57Fe)
 many enzymes use iron in their catalytic center
 iron-56 is the end product of nuclear fusion and one of the most stable atomic nuclei of all elements
 superconductivity?
 magnetic effect
 ferrocene

Industrial production

See also: Iron ore

The production of iron or steel is a process containing two main stages, unless the desired product is cast
iron. The first stage is to produce pig iron in a blast furnace. Alternatively, it may be directly reduced. The
second is to make wrought iron or steel from pig iron by a further process.

The fining process of smelting iron ore to make wrought iron from pig iron, with the right illustration
displaying men working a blast furnace, from the Tiangong Kaiwu encyclopedia, published in 1637 by
Song Yingxing.

How iron was extracted in the 19th century

For a few limited purposes like electromagnet cores, pure iron is produced by electrolysis of a ferrous
sulfate solution.[24]

Blast furnace

Main article: Blast furnace

Ninety percent of all mining of metallic ores is for the extraction of iron[citation needed]. Industrially, iron
production involves iron ores, principally hematite (nominally Fe2O3) and magnetite (Fe3O4) in a
carbothermic reaction (reduction with carbon) in a blast furnace at temperatures of about 2000 °C. In a
blast furnace, iron ore, carbon in the form of coke, and a flux such as limestone (which is used to remove
silicon dioxide impurities in the ore which would otherwise clog the furnace with solid material) are fed
into the top of the furnace, while a massive blast of heated air, about 4 tons per ton of iron,[41] is forced
into the furnace at the bottom.

Iron output in 2005

In the furnace, the coke reacts with oxygen in the air blast to produce carbon monoxide:

2 C + O2 → 2 CO

The carbon monoxide reduces the iron ore (in the chemical equation below, hematite) to molten iron,
becoming carbon dioxide in the process:

Fe2O3 + 3 CO → 2 Fe + 3 CO2

Some iron oxide in the high-temperature lower region of the furnace reacts directly with the coke:

2 Fe2O3 + 3 C → 4 Fe + 3 CO2

The flux is present to melt impurities in the ore, principally silicon dioxide sand and other silicates.
Common fluxes include limestone (principally calcium carbonate) and dolomite (calcium-magnesium
carbonate). Other fluxes may be used depending on the impurities that need to be removed from the ore.
In the heat of the furnace the limestone flux decomposes to calcium oxide (also known as quicklime):

CaCO3 → CaO + CO2

Then calcium oxide combines with silicon dioxide to form a liquid slag.

CaO + SiO2 → CaSiO3

The slag melts in the heat of the furnace. In the bottom of the furnace, the molten slag floats on top of the
denser molten iron, and apertures in the side of the furnace are opened to run off the iron and the slag
separately. The iron, once cooled, is called pig iron, while the slag can be used as a material in road
construction or to improve mineral-poor soils for agriculture.[41]
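The overall reduction Fe2O3 + 3 CO → 2 Fe + 3 CO2 fixes the minimum ore and carbon demand per tonne of iron. Below is a minimal sketch of that mass balance, assuming pure hematite and complete conversion; real furnace burdens are far less ideal:

    # Idealized blast-furnace mass balance from the overall reduction above.
    M_FE, M_O, M_C = 55.845, 15.999, 12.011        # atomic masses, g/mol
    m_fe2o3 = 2 * M_FE + 3 * M_O                   # ~159.7 g/mol hematite
    ore_per_t_fe = m_fe2o3 / (2 * M_FE)            # t hematite per t iron
    c_per_t_fe = 3 * M_C / (2 * M_FE)              # t carbon (as CO) per t iron
    print(f"hematite: {ore_per_t_fe:.2f} t per tonne of iron")
    print(f"carbon:   {c_per_t_fe:.2f} t per tonne of iron")
    # -> about 1.43 t of pure Fe2O3 and 0.32 t of carbon per tonne of iron.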

This heap of iron ore pellets will be used in steel production.

In 2005, approximately 1,544 million metric tons of iron ore were produced worldwide. According to the
British Geological Survey, China was the top producer of iron ore, with at least a one-quarter world share,
followed by Brazil, Australia and India.

Direct iron reduction

Since coke is becoming more regulated due to environmental concerns, alternative methods of processing
iron have been developed. One of them is known as direct iron reduction.[41] It reduces iron ore to a powdery substance called sponge iron, which is suitable for steelmaking. There are two main reactions that
go on in the direct reduction process:

Natural gas is partially oxidized (with heat and a catalyst):

2 CH4 + O2 → 2 CO + 4 H2

These gases are then treated with iron ore in a furnace, producing solid sponge iron:

Fe2O3 + CO + 2 H2 → 2 Fe + CO2 + 2 H2O

Silica is removed later by adding a flux such as limestone.
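Taken together, the two idealized reactions imply that one CH4 supplies exactly the 1 CO + 2 H2 consumed per Fe2O3, i.e. one mole of natural gas per two moles of iron. A small sketch of that ratio, assuming idealized stoichiometry only (no losses or excess reductant):

    # Methane demand implied by the two direct-reduction equations above.
    M_FE, M_CH4 = 55.845, 16.043                   # g/mol
    ch4_per_t_fe = M_CH4 / (2 * M_FE)              # t CH4 per t sponge iron
    print(f"~{ch4_per_t_fe:.3f} t of CH4 per tonne of sponge iron (idealized)")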

Further processes

Main articles: Steelmaking and Ironworks

Iron-carbon phase diagram, various stable solid solution forms

Pig iron is not pure iron, but has 4–5% carbon dissolved in it along with small amounts of other impurities such as sulfur, silicon, phosphorus and manganese. Because of its high carbon content, pig iron is brittle and hard. This form of iron, also known as cast iron, is used to cast articles in foundries such as stoves, pipes, radiators, lamp-posts and rails.

Alternatively pig iron may be made into steel (with up to about 2% carbon) or wrought iron
(commercially pure iron). Various processes have been used for this, including finery forges, puddling
furnaces, Bessemer converters, open hearth furnaces, basic oxygen furnaces, and electric arc furnaces. In
all cases, the objective is to oxidize some or all of the carbon, together with other impurities. On the other
hand, other metals may be added to make alloy steels.

The hardness of the steel depends upon its carbon content: the higher the percentage of carbon, the greater
the hardness and the lesser the malleability. The properties of the steel can also be changed by several
methods.

Annealing involves the heating of a piece of steel to 700–800 °C for several hours and then gradual
cooling. It makes the steel softer and more workable.

Steel may be hardened by cold working. The metal is bent or hammered into its final shape at a relatively cool temperature. Cold forging is the stamping of a piece of steel into shape by a heavy press; wrenches are commonly made by cold forging. Cold rolling, which produces a thinner but harder sheet, and cold drawing, which makes a thinner but stronger wire, are two other methods of cold working. Steel can also be hardened by heating it until red hot and then quenching it in water, which makes it harder but more brittle. If the steel becomes too hard, it can be tempered: reheated to an intermediate temperature and allowed to cool, leaving it less brittle.

Heat treatment is another way to harden steel. The steel is heated red hot, then cooled quickly. The iron carbide molecules are decomposed by the heat but do not have time to re-form. Because the free carbon atoms are trapped in place, the steel becomes much harder and stronger than before.[41]

Sometimes both toughness and hardness are desired. A process called case hardening may then be used: the steel is heated to about 900 °C in a bed of charcoal and/or a nitrogen-rich atmosphere, and carbon and/or nitrogen atoms diffuse into the steel, making the surface very hard. The surface cools quickly while the inside cools slowly, producing an extremely hard surface over a durable, shock-resistant inner layer.

Iron may be passivated by dipping it into a concentrated nitric acid solution. This forms a protective layer
of oxide on the metal, protecting it from further corrosion. When the metal is jarred, however, the layer is
broken, allowing the metal to corrode again.[24]

Applications

Metallurgical

Photon mass attenuation coefficient for iron.

Iron is the most widely used of all the metals, accounting for 95% of worldwide metal production.[citation needed] Its low cost and high strength make it indispensable in engineering applications such as the
construction of machinery and machine tools, automobiles, the hulls of large ships, and structural
components for buildings. Since pure iron is quite soft, it is most commonly used in the form of steel.

Commercially available iron is classified based on purity and the abundance of additives. Pig iron has
3.5–4.5% carbon[43] and contains varying amounts of contaminants such as sulfur, silicon and phosphorus.
Pig iron is not a saleable product, but rather an intermediate step in the production of cast iron and steel
from iron ore. Cast iron contains 2–4% carbon, 1–6% silicon, and small amounts of manganese.
Contaminants present in pig iron that negatively affect material properties, such as sulfur and phosphorus,
have been reduced to an acceptable level. It has a melting point in the range of 1420–1470 K, which is
lower than either of its two main components, and makes it the first product to be melted when carbon
and iron are heated together. Its mechanical properties vary greatly, dependent upon the form carbon
takes in the alloy.

"White" cast irons contain their carbon in the form of cementite, or iron carbide. This hard, brittle
compound dominates the mechanical properties of white cast irons, rendering them hard, but unresistant

384
to shock. The broken surface of a white cast iron is full of fine facets of the broken carbide, a very pale,
silvery, shiny material, hence the appellation.

In gray iron the carbon exists as fine flakes of free graphite, which also render the material brittle because the sharp-edged flakes act as stress raisers. A newer variant of gray iron, referred to as ductile iron, is specially treated with trace amounts of magnesium to alter the shape of the graphite to spheroids, or nodules, vastly increasing the toughness and strength of the material.

Wrought iron contains less than 0.25% carbon.[43] It is a tough, malleable product, but not as fusible as pig
iron. If honed to an edge, it loses it quickly. Wrought iron is characterized by the presence of fine fibers
of slag entrapped in the metal. Wrought iron is more corrosion resistant than steel. It has been almost
completely replaced by mild steel for traditional "wrought iron" products and blacksmithing.

Mild steel corrodes more readily than wrought iron, but is cheaper and more widely available. Carbon
steel contains 2.0% carbon or less,[44] with small amounts of manganese, sulfur, phosphorus, and silicon.
Alloy steels contain varying amounts of carbon as well as other metals, such as chromium, vanadium,
molybdenum, nickel, tungsten, etc. Their alloy content raises their cost, and so they are usually only
employed for specialist uses. One common alloy steel, though, is stainless steel. Recent developments in
ferrous metallurgy have produced a growing range of microalloyed steels, also termed 'HSLA' or high-
strength, low alloy steels, containing tiny additions to produce high strengths and often spectacular
toughness at minimal cost.

Apart from traditional applications, iron is also used for protection from ionizing radiation. Although it is lighter than lead, another traditional shielding material, iron is much stronger mechanically. The attenuation of radiation as a function of energy is shown in the graph.

The main disadvantage of iron and steel is that pure iron, and most of its alloys, suffer badly from rust if
not protected in some way. Painting, galvanization, passivation, plastic coating and bluing are all used to
protect iron from rust by excluding water and oxygen or by cathodic protection.

Of compounds

Although its metallurgical role is dominant in terms of amounts, iron compounds are also pervasive in industry, being used in many niche applications. Iron catalysts are traditionally used in the Haber-Bosch process for the production of ammonia and the Fischer-Tropsch process for conversion of carbon monoxide to hydrocarbons for fuels and lubricants.[45] Powdered iron in an acidic solvent was used in the Béchamp reduction, the reduction of nitrobenzene to aniline.[46]

Iron(III) chloride finds use in water purification and sewage treatment, in the dyeing of cloth, as a
coloring agent in paints, as an additive in animal feed, and as an etchant for copper in the manufacture of
printed circuit boards.[47] It can also be dissolved in alcohol to form tincture of iron.[24] The other halides
tend to be limited to laboratory uses.

Iron(II) sulfate is mainly used as a precursor to other iron compounds, to reduce chromate in cement, and to fortify foods and treat iron deficiency anemia. Iron(III) sulfate is used in settling minute sewage particles in tank water. Iron(II) chloride is used as a reducing
flocculating agent, in the formation of iron complexes and magnetic iron oxides, and as a reducing agent
in organic synthesis.

Uptake and storage

In cells, iron storage is carefully regulated; "free" iron ions do not exist as such. A major component of
this regulation is the protein transferrin, which binds iron ions absorbed from the duodenum and carries them in the blood to cells.[48] In animals, plants, and fungi, iron is often the metal ion incorporated into the
heme complex. Heme is an essential component of cytochrome proteins, which mediate redox reactions,
and of oxygen carrier proteins such as hemoglobin, myoglobin, and leghemoglobin. Inorganic iron also
contributes to redox reactions in the iron-sulfur clusters of many enzymes, such as nitrogenase (involved
in the synthesis of ammonia from nitrogen and hydrogen) and hydrogenase. Non-heme iron proteins
include the enzymes methane monooxygenase (oxidizes methane to methanol), ribonucleotide reductase
(reduces ribose to deoxyribose; DNA biosynthesis), hemerythrins (oxygen transport and fixation in marine invertebrates) and purple acid phosphatase (hydrolysis of phosphate esters).

Iron distribution is heavily regulated in mammals, partly because iron ions have a high potential for
biological toxicity.[49]

Iron acquisition poses a problem for aerobic organisms because ferric iron is poorly soluble near neutral
pH. Thus, bacteria have evolved high-affinity sequestering agents called siderophores.[50][51][52]

Biological role

Iron is abundant in biology. Iron-proteins are found in all living organisms, ranging from the
evolutionarily primitive archaea to humans. The color of blood is due to hemoglobin, an iron-containing protein. As illustrated by hemoglobin, iron is often bound to cofactors, e.g. in hemes. Iron-sulfur clusters are pervasive and include nitrogenase, the enzyme responsible for biological nitrogen fixation. Influential theories of evolution have invoked a role for iron sulfides, such as the iron-sulfur world theory.[citation needed]

Structure of heme b; in the protein, additional ligand(s) would be attached to Fe.

Main article: Human iron metabolism

Iron is a necessary trace element found in nearly all living organisms. Iron-containing enzymes and
proteins, often containing heme prosthetic groups, participate in many biological oxidations and in transport. Examples of proteins found in higher organisms include hemoglobin, cytochrome (see high-valent iron), and catalase.[53]

Bioinorganic compounds

The most famous bioinorganic compounds of iron are heme proteins: hemoglobin, myoglobin, and
cytochrome P450. These compounds can transport gases, build enzymes, and be used in transferring
electrons. Metalloproteins are a group of proteins with metal ion cofactors. Some examples of iron
metalloproteins are ferritin and rubredoxin. Many enzymes vital to life contain iron, such as catalase,
lipoxygenases, and IRE-BP.

Health and diet

Main article: Iron deficiency (medicine)

Iron is pervasive, but particularly rich sources of dietary iron include red meat, lentils, beans, poultry,
fish, leaf vegetables, tofu, chickpeas, black-eyed peas, blackstrap molasses, fortified bread, and fortified
breakfast cereals. Iron in low amounts is found in molasses, teff and farina. Iron in meat (heme iron) is
more easily absorbed than iron in vegetables.[54] Although most studies suggest that heme/hemoglobin
from red meat has effects which may increase the likelihood of colorectal cancer,[55][56] there is still some
controversy,[57] and even a few studies suggesting that there is not enough evidence to support such
claims.[58]

Iron provided by dietary supplements is often found as iron(II) fumarate, although iron sulfate is cheaper
and is absorbed equally well. Elemental iron, or reduced iron, despite being absorbed at only one third to
two thirds the efficiency (relative to iron sulfate),[59] is often added to foods such as breakfast cereals or
enriched wheat flour. Iron is most available to the body when chelated to amino acids[60] and is also
available for use as a common iron supplement. Often the amino acid chosen for this purpose is the
cheapest and most common amino acid, glycine, leading to "iron glycinate" supplements.[61] The
Recommended Dietary Allowance (RDA) for iron varies considerably based on age, gender, and source
of dietary iron (heme-based iron has higher bioavailability).[62] Infants may require iron supplements if
they are bottle-fed cow's milk.[63] Blood donors and pregnant women are at special risk of low iron levels
and are often advised to supplement their iron intake.[64]

Regulation of uptake

Main article: Hepcidin

Iron uptake is tightly regulated by the human body, which has no regulated physiological means of
excreting iron. Only small amounts of iron are lost daily due to mucosal and skin epithelial cell sloughing,
so control of iron levels is mostly by regulating uptake.[65] Regulation of iron uptake is impaired in some
people as a result of a genetic defect that maps to the HLA-H gene region on chromosome 6. In these
people, excessive iron intake can result in iron overload disorders, such as hemochromatosis. Many
people have a genetic susceptibility to iron overload without realizing it or being aware of a family
history of the problem. For this reason, it is advised that people do not take iron supplements unless they
suffer from iron deficiency and have consulted a doctor. Hemochromatosis is estimated to cause disease
in between 0.3 and 0.8% of Caucasians.[66]

MRI finds that iron accumulates in the hippocampus of the brains of those with Alzheimer's disease and
in the substantia nigra of those with Parkinson's disease.[67]

Precautions

Main article: Iron poisoning

Large amounts of ingested iron can cause excessive levels of iron in the blood. High blood levels of free
ferrous iron react with peroxides to produce free radicals, which are highly reactive and can damage
DNA, proteins, lipids, and other cellular components. Thus, iron toxicity occurs when there is free iron in
the cell, which generally occurs when iron levels exceed the capacity of transferrin to bind the iron.
Damage to the cells of the gastrointestinal tract can also prevent them from regulating iron absorption
leading to further increases in blood levels. Iron typically damages cells in the heart, liver and elsewhere,
which can cause significant adverse effects, including coma, metabolic acidosis, shock, liver failure,
coagulopathy, adult respiratory distress syndrome, long-term organ damage, and even death.[68] Humans
experience iron toxicity above 20 milligrams of iron for every kilogram of mass, and 60 milligrams per
kilogram is considered a lethal dose.[69] Overconsumption of iron, often the result of children eating large
quantities of ferrous sulfate tablets intended for adult consumption, is one of the most common
toxicological causes of death in children under six.[69] The Dietary Reference Intake (DRI) lists the
Tolerable Upper Intake Level (UL) for adults as 45 mg/day. For children under fourteen years old the UL
is 40 mg/day.
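To put the thresholds above in concrete terms, the sketch below counts how many typical adult iron tablets would reach them for a small child. The figure of 65 mg of elemental iron per tablet is an assumed typical value for a 325 mg ferrous sulfate tablet, not a number from the text:

    # Tablets needed to reach the quoted toxic (20 mg/kg) and lethal
    # (60 mg/kg) doses of elemental iron, for an assumed tablet strength.
    IRON_PER_TABLET_MG = 65.0   # assumed elemental iron per tablet

    def tablets(body_kg: float, mg_per_kg: float) -> float:
        return body_kg * mg_per_kg / IRON_PER_TABLET_MG

    for label, dose in (("toxic", 20), ("lethal", 60)):
        print(f"15 kg child, {label}: ~{tablets(15, dose):.0f} tablets")
    # About 5 tablets reach toxicity and about 14 a potentially lethal dose,
    # which is why accidental ingestion by children is so dangerous.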

The medical management of iron toxicity is complicated, and can include use of a specific chelating agent
called deferoxamine to bind and expel excess iron from the body.[68][70]

Nonmetal

From Wikipedia, the free encyclopedia


Nonmetal, or non-metal, is a term used in chemistry when classifying the chemical elements. On the
basis of their general physical and chemical properties, every element in the periodic table can be termed
either a metal or a nonmetal. (A few elements with intermediate properties are referred to as metalloids).

The elements generally regarded as nonmetals are:

 hydrogen (H)
 In Group 14: carbon (C)
 In Group 15 (the pnictogens): nitrogen (N), phosphorus (P)
 Several elements in Group 16, the chalcogens: oxygen (O), sulfur (S), selenium (Se)
 All elements in Group 17 - the halogens
 All elements in Group 18 - the noble gases

There is no rigorous definition for the term "nonmetal" - it covers a general spectrum of behaviour.
Common properties considered characteristic of a nonmetal include:

 poor conductors of heat and electricity compared to metals
 they form acidic oxides (whereas metals generally form basic oxides)
 in solid form, they are dull and brittle, unlike metals, which are lustrous, ductile or malleable
 usually have lower densities than metals
 they have significantly lower melting points and boiling points than metals (with the exception of carbon)
 nonmetals have high electronegativity

They also have a negative valence, compared to the positive valence of metals.

Only eighteen elements in the periodic table are generally considered nonmetals, compared to over eighty
metals, but nonmetals make up most of the crust, atmosphere and oceans of the earth. Bulk tissues of
living organisms are composed almost entirely of nonmetals. Most nonmetals are monatomic noble gases
or form diatomic molecules in their elemental state, unlike metals which (in their elemental state) do not
form molecules at all.

[edit] Metallisation at huge pressures

Nevertheless, even these 18 elements tend to become metallic at large enough pressures (see nearby
periodic table at ~300 GPa).

[Compact periodic table omitted; its legend color-codes alkali metals, alkaline earth metals, lanthanides, actinides, transition metals, other metals, metalloids, other nonmetals, halogens, and noble gases.]

Sulfur

From Wikipedia, the free encyclopedia


This article is about the chemical element. For other uses, see Sulfur (disambiguation).

phosphorus ← sulfur → chlorine

O

S

Se

16S

Periodic table

Appearance

Lemon yellow microcrystals sintered into cake

Spectral lines of Sulfur

General properties

Name, symbol, number sulfur, S, 16

Pronunciation /ˈsʌlfər/ SUL-fər

Element category nonmetal

Group, period, block 16, 3, p

Standard atomic weight 32.065 g·mol−1

Electron configuration [Ne] 3s2 3p4

Electrons per shell 2, 8, 6 (Image)

Physical properties

Phase solid

Density (near r.t.) (alpha) 2.07 g·cm−3

Density (near r.t.) (beta) 1.96 g·cm−3

Density (near r.t.) (gamma) 1.92 g·cm−3

Liquid density at m.p. 1.819 g·cm−3

Melting point 388.36 K, 115.21 °C, 239.38 °F

Boiling point 717.8 K, 444.6 °C, 832.3 °F

Critical point 1314 K, 20.7 MPa

Heat of fusion (mono) 1.727 kJ·mol−1

Heat of vaporization (mono) 45 kJ·mol−1

Specific heat capacity (25 °C) 22.75 J·mol−1·K−1

Vapor pressure

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 375 408 449 508 591 717

Atomic properties

Oxidation states 6, 5, 4, 3, 2, 1, −1, −2 (strongly acidic oxide)

Electronegativity 2.58 (Pauling scale)

Ionization energies 1st: 999.6 kJ·mol−1


(more)
2nd: 2252 kJ·mol−1

3rd: 3357 kJ·mol−1

Covalent radius 105±3 pm

Van der Waals radius 180 pm

Miscellanea

Crystal structure orthorhombic

Magnetic ordering diamagnetic[1]

Electrical resistivity (20 °C, amorphous) 2×10^15 Ω·m

Thermal conductivity (300 K, amorphous) 0.205 W·m−1·K−1

Bulk modulus 7.7 GPa

Mohs hardness 2.0

CAS registry number 7704-34-9

Most stable isotopes

Main article: Isotopes of sulfur

iso    NA       half-life   DM   DE (MeV)   DP
32S    95.02%   32S is stable with 16 neutrons
33S    0.75%    33S is stable with 17 neutrons
34S    4.21%    34S is stable with 18 neutrons
35S    syn      87.32 d     β−   0.167      35Cl
36S    0.02%    36S is stable with 20 neutrons


Sulfur or sulphur ( /ˈsʌlfər/ SUL-fər; see spelling below) is the chemical element that has the atomic
number 16. It is denoted with the symbol S. It is an abundant, multivalent non-metal. Sulfur, in its native
form, is a bright yellow crystalline solid. In nature, it can be found as the pure element and as sulfide and
sulfate minerals. It is an essential element for life and is found in two amino acids: cysteine and
methionine. Its commercial uses are primarily in fertilizers, but it is also widely used in black gunpowder,
matches, insecticides and fungicides. Elemental sulfur crystals are commonly sought after by mineral
collectors for their brightly colored polyhedral shapes. In nonscientific contexts, it can also be referred to as brimstone, the "burning stone".[2]

Contents


 1 Characteristics

o 1.1 Physical
o 1.2 Chemical
o 1.3 Allotropes
o 1.4 Isotopes
o 1.5 Natural occurrence
 2 Production
 3 Compounds
o 3.1 Sulfides
o 3.2 Oxides and oxyanions
o 3.3 Halides and oxyhalides
o 3.4 Pnictides
o 3.5 Metal sulfides
o 3.6 Organic compounds
 4 History
o 4.1 Antiquity
o 4.2 Modern times
o 4.3 Spelling and etymology
 5 Applications
o 5.1 Sulfuric acid
o 5.2 Other large scale sulfur chemicals
o 5.3 Fertilizer
o 5.4 Fine chemicals
o 5.5 Fungicide and pesticide
 6 Biological role
o 6.1 Protein and organic cofactors
o 6.2 Metalloproteins and inorganic cofactors
o 6.3 Sulfur metabolism
 7 Precautions
 8 See also
 9 References
 10 External links

[edit] Characteristics

When burned, sulfur melts to a blood-red liquid and emits a blue flame which is best observed in the dark.

[edit] Physical

At room temperature, sulfur is a soft, bright-yellow solid with only a faint odor, similar to that of matches
(the strong "smell of sulfur" usually refers to the odor of hydrogen sulfide (H2S) or organosulfur
compounds). Sulfur is an electrical insulator. It melts slightly above 100 °C and easily sublimes.[2]

[edit] Chemical

Sulfur burns with a blue flame that emits sulfur dioxide, notable for its peculiar suffocating odor. Sulfur is
insoluble in water, but soluble in carbon disulfide and, to a lesser extent, in other non-polar organic
solvents such as benzene and toluene. Sulfur in the solid state ordinarily exists as cyclic crown-shaped S8
molecules. The crystallography of sulfur is complex. Depending on the specific conditions, the sulfur
allotropes form several crystal structures, with rhombic and monoclinic S8 best known.[2]

Unlike most other liquids, molten sulfur increases in viscosity with temperature above about 200 °C (392 °F) due to the formation of polymers. The molten sulfur assumes a dark red color above this temperature. At still
higher temperatures, however, the viscosity is decreased as depolymerization occurs.

Amorphous or "plastic" sulfur can be produced through the rapid cooling of molten sulfur. X-ray
crystallography studies show that the amorphous form may have a helical structure with eight atoms per
turn. This form is metastable at room temperature and gradually reverts to crystalline form. This process
happens within a matter of hours to days but can be rapidly catalyzed.

[edit] Allotropes

The structure of the cyclooctasulfur molecule, S8.

Main article: Allotropes of sulfur

Sulfur forms more than 30 solid allotropes, more than any other element.[3] Besides S8, several other rings
are known.[4] Removing one atom from the crown gives S7, which is more deeply yellow than S8. HPLC
analysis of "elemental sulfur" reveals an equilibrium mixture of mainly S8, but also S7 and small amounts
of S6.[5] Larger rings have been prepared, including S12 and S18.[6][7] By contrast, sulfur's lighter neighbor
oxygen only exists in two states of allotropic significance: O2 and O3. Selenium, the heavier analogue of
sulfur, can form rings but is more often found as a polymer chain.

[edit] Isotopes

Main article: Isotopes of sulfur

Sulfur has 25 known isotopes, four of which are stable: 32S (95.02%), 33S (0.75%), 34S (4.21%), and 36S
(0.02%). Other than 35S, the radioactive isotopes of sulfur are all short-lived. 35S is formed from cosmic ray spallation of 40Ar in the atmosphere. It has a half-life of 87 days.

When sulfide minerals are precipitated, isotopic equilibration among solids and liquid may cause small
differences in the δS-34 values of co-genetic minerals. The differences between minerals can be used to
estimate the temperature of equilibration. The δC-13 and δS-34 of coexisting carbonates and sulfides can
be used to determine the pH and oxygen fugacity of the ore-bearing fluid during ore formation.

In most forest ecosystems, sulfate is derived mostly from the atmosphere; weathering of ore minerals and
evaporites also contribute some sulfur. Sulfur with a distinctive isotopic composition has been used to
identify pollution sources, and enriched sulfur has been added as a tracer in hydrologic studies.
Differences in the natural abundances can also be used in systems where there is sufficient variation in the 34S of ecosystem components. Rocky Mountain lakes thought to be dominated by atmospheric sources of sulfate have been found to have different δ34S values from lakes believed to be dominated by watershed sources of sulfate.
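The δ34S values mentioned here are per-mil deviations of a sample's 34S/32S ratio from a reference standard. Below is a minimal sketch of the calculation; the reference ratio used (0.0441626, for the Vienna-Cañon Diablo Troilite standard) is a literature value supplied for illustration, not a number from this text:

    # delta-34S in per mil relative to an assumed V-CDT reference ratio.
    R_STD = 0.0441626   # 34S/32S of the V-CDT standard (assumed)

    def delta_34s(r_sample: float) -> float:
        return (r_sample / R_STD - 1.0) * 1000.0

    print(f"{delta_34s(0.0450):+.1f} per mil")  # 34S-enriched sample
    print(f"{delta_34s(0.0435):+.1f} per mil")  # 34S-depleted sample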

[edit] Natural occurrence

Most of the yellow and orange hues of Io are due to elemental sulfur and sulfur compounds, produced by
active volcanoes.

Native sulfur crystals

The most common sulfur isotope in nature, sulfur-32, is created in extremely large, extremely hot (over
2.5 billion kelvin) stars. This requires fusion of one nucleus of silicon plus one nucleus of helium. [8] For
details of the process, see silicon burning. Partly because this so-called alpha process produces elements
in abundance, sulfur is the 10th most common element in the universe.

The distinctive colors of Jupiter's volcanic moon, Io, are from various forms of molten, solid and gaseous
sulfur.

There is also a dark area near the crater Aristarchus on Earth's Moon that has been suggested to be a sulfur deposit.

Sulfur is present in many types of meteorites. Ordinary chondrites contain on average 2.1% sulfur, and
carbonaceous chondrites may contain as much as 6.6%. Sulfur in meteorites is normally present as troilite
(FeS), but other sulfides are found in some meteorites, and carbonaceous chondrites contain free sulfur,
sulfates, and possibly other sulfur compounds.[9]

A man carrying sulfur blocks from Kawah Ijen, a volcano in East Java, Indonesia (photo 2009)

On Earth, elemental sulfur can be found near hot springs and volcanic regions in many parts of the world,
especially along the Pacific Ring of Fire. Such volcanic deposits are currently mined in Indonesia, Chile,
and Japan. Sicily is also famous for its sulfur mines. Sulfur deposits are polycrystalline, and the largest
documented single crystal measured 22×16×11 cm.[10][11]

Significant deposits of elemental sulfur also exist in salt domes along the coast of the Gulf of Mexico, and
in evaporites in eastern Europe and western Asia. The sulfur in these deposits is believed to come from
the action of anaerobic bacteria on sulfate minerals, especially gypsum, although apparently native sulfur
may be produced by geological processes alone, without the aid of living organisms (see below).
However, fossil-based sulfur deposits from salt domes have, until recently, been the basis for commercial
production in the United States, Poland, Russia, Turkmenistan, and Ukraine.[12] As noted below, such
sources are now of secondary commercial importance, and most are no longer worked.

Common naturally-occurring sulfur compounds include the sulfide minerals, such as pyrite (iron sulfide),
cinnabar (mercury sulfide), galena (lead sulfide), sphalerite (zinc sulfide) and stibnite (antimony sulfide);
and the sulfates, such as gypsum (calcium sulfate), alunite (potassium aluminium sulfate), and barite
(barium sulfate). It occurs naturally in volcanic emissions, such as from hydrothermal vents, and from
bacterial action on decaying sulfur-containing organic matter.

[edit] Production

Sulfur may be found as a pure mineral ("native" sulfur) and historically was usually obtained in this way.
However, today's sulfur production is as a side product of other industrial processes, such as oil refining.
In these processes, sulfur often occurs as sulfur compounds, which are then converted to elemental sulfur.

As a mineral, native sulfur under salt domes is thought to be a fossil mineral resource, produced by the
action of ancient bacteria on sulfate deposits. It was removed from such salt-dome mines mainly by the
Frasch process.[12] In this method, superheated water was pumped into a native sulfur deposit to melt the
sulfur, and then compressed air returned the relatively pure (99.5%) melted product to the surface.
Throughout the 20th century this procedure produced elemental sulfur that required no further purification. However, due to the limited number of such sulfur deposits and the high cost of working them, this process for mining sulfur has not been employed in a major way anywhere in the world since 2002.

Sulfur recovered from hydrocarbons in Alberta, stockpiled for shipment in North Vancouver, B.C.

Today, sulfur is produced from petroleum, natural gas, and related fossil resources, from which it is
obtained mainly as hydrogen sulfide (a gas). Organosulfur compounds, which are undesirable impurities in petroleum, may be upgraded by subjecting them to hydrodesulfurization, which cleaves the C-S bonds:

R-S-R + 2 H2 → 2 RH + H2S

The hydrogen sulfide from this process, like that occurring in natural gas, is converted into elemental sulfur by the Claus process. This process entails oxidation of some of the hydrogen sulfide to sulfur dioxide and then comproportionation of the two:

1.5 O2 + H2S → SO2 + H2O

SO2 + 2 H2S → 3 S + 2 H2O
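Adding the two Claus steps gives the overall balance 3 H2S + 1.5 O2 → 3 S + 3 H2O: one sulfur atom is recovered per molecule of H2S. A minimal sketch of the implied yield, assuming idealized, complete single-pass conversion:

    # Idealized Claus-process yield from the overall balance above.
    M_S, M_H2S = 32.065, 34.081                    # g/mol
    print(f"~{M_S / M_H2S:.2f} t of elemental sulfur per tonne of H2S")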

Many sour gas streams (gases containing some H2S) are treated to remove the sulfur, e.g. Shell-Paques
sulfide removal/sulfur recovery process. Owing to the high sulfur content of the Athabasca Oil Sands,
stockpiles of elemental sulfur from this process now exist throughout Alberta, Canada.

[edit] Compounds

See also Category: sulfur compounds

Common oxidation states of sulfur range from −2 to +6. Sulfur forms stable compounds with all elements
except the noble gases.

[edit] Sulfides

Treatment of sulfur with hydrogen gives hydrogen sulfide. When dissolved in water, hydrogen sulfide is
mildly acidic:[2]

H2S ⇌ HS− + H+

Reduction of elemental sulfur gives polysulfides, which consist of chains of sulfur atoms terminated with S− centres:

2 Na + S8 → Na2S8

This reaction highlights arguably the single most distinctive property of sulfur: its ability to catenate (bind
to itself by formation of chains). Protonation of these polysulfide anions gives the polysulfanes, H2Sx
where x = 2, 3, and 4.[13] Ultimately reduction of sulfur gives sulfide salts:

16 Na + S8 → 8 Na2S

The interconversion of these species is exploited in the sodium-sulfur battery. The radical anion S3− gives the blue color to the mineral lapis lazuli.

Lapis lazuli owes its blue color to a sulfur radical.

Elemental sulfur can also be oxidized, for example to give bicyclic S82+.

[edit] Oxides and oxyanions

The principal sulfur oxides are obtained by burning sulfur:

S + O2 → SO2

SO2 + 1/2 O2 → SO3

Other oxides are known, e.g. sulfur monoxide and disulfur mono- and dioxides, but they are unstable.

The sulfur oxides form numerous oxyanions with the formula SOn2−. Sulfur dioxide and sulfites (SO32−) are related to the unstable sulfurous acid (H2SO3). Sulfur trioxide and sulfates (SO42−) are related to sulfuric acid. Sulfuric acid and SO3 combine to give oleum, a solution of pyrosulfuric acid (H2S2O7) in sulfuric acid.

Peroxydisulfuric acid

Peroxides convert sulfur into unstable oxides such as S8O, a sulfoxide. Peroxymonosulfuric acid (H2SO5) and peroxydisulfuric acid (H2S2O8) are made from the action of SO3 and H2SO4, respectively, on concentrated H2O2.

The sulfate anion, SO42−

Thiosulfate salts (S2O32−), sometimes referred to as "hyposulfites" and used in photographic fixing (HYPO) and as reducing agents, feature sulfur in two oxidation states. Sodium dithionite (Na2S2O4) contains the more highly reducing dianion S2O42−. Sodium dithionate (Na2S2O6) is the first member of the polythionic acids (H2SnO6), where n can range from 3 to many.

[edit] Halides and oxyhalides

Sulfur hexafluoride, SF6, is a dense gas used as a nonreactive and nontoxic propellant. In contrast, sulfur tetrafluoride is a rarely used reagent that is highly toxic. The two main sulfur chlorides are sulfur
dichloride (SCl2) and sulfur monochloride (S2Cl2). Sulfuryl chloride (SO2Cl2) and chlorosulfuric acid
(ClSO3H) are derivatives of sulfuric acid. Thionyl chloride (SOCl2) is a reagent in organic synthesis.

[edit] Pnictides

The most important S-N compound is the cage tetrasulfur tetranitride (S4N4). Heating this compound gives polymeric sulfur nitride ((SN)x), which has metallic properties even though it does not contain any
metal atoms. Thiocyanates contain the SCN− group. Oxidation of thiocyanate gives thiocyanogen, (SCN)2
with the connectivity NCS-SCN. Phosphorus sulfides are numerous, the most important commercially
being the cages P4S10 and P4S3.[14][15]

[edit] Metal sulfides
Main article: Sulfide mineral

Many economically important minerals occur as sulfides; the principal ores of copper, zinc, nickel, cobalt, molybdenum and others are sulfides. These materials tend to be dark-colored semiconductors that are not readily attacked by water or even many acids. They are formed, both geochemically and in the laboratory, by the reaction of hydrogen sulfide with metal salts. The mineral galena (PbS) was the first demonstrated semiconductor and found use as a signal rectifier in the cat's whiskers of early crystal radios. The iron sulfide called pyrite, the so-called "fool's gold," has the formula FeS2.[16] The upgrading of these ores, usually by roasting, is costly and environmentally hazardous. Sulfur corrodes many metals via the process called tarnishing.

[edit] Organic compounds

Main article: organosulfur compounds

Illustrative organosulfur compounds: allicin, the active ingredient in garlic; (R)-cysteine, an amino acid containing a thiol group; methionine, an amino acid containing a thioether; diphenyl disulfide, a representative disulfide; perfluorooctanesulfonic acid, a controversial surfactant; dibenzothiophene, a component of crude oil; and penicillin.

Some of the main classes of sulfur-containing organic compounds include the following (R, R′, and R″ are organic groups such as CH3):[17]

 Thioethers have the form R-S-R′. These compounds are the sulfur equivalents of ethers.
 Sulfonium ions have the formula RR′R″S+, i.e. three groups are attached to the cationic sulfur center. Dimethylsulfoniopropionate (DMSP; (CH3)2S+CH2CH2COO−) is a sulfonium ion that is important in the marine organic sulfur cycle.
 Thiols (also known as mercaptans) have the form R-SH. These are the sulfur equivalents of alcohols. Treatment of thiols with base gives thiolate ions (R-S−).
 Sulfoxides and sulfones have the forms R-S(=O)-R′ and R-S(=O)(=O)-R′ respectively. The simplest sulfoxide, DMSO, is a common solvent. A common sulfone is sulfolane, C4H8SO2.
 Sulfonic acids (R-SO3H) are used in many detergents.
Organosulfur compounds are responsible for the unpleasant odors of decaying organic matter. Thiols and sulfides are used in the odorization of natural gas, notably t-butyl mercaptan. The odor of garlic and "skunk stink" are also caused by organosulfur compounds. Not all organic sulfur compounds smell unpleasant; for example, grapefruit mercaptan, a sulfur-containing monoterpenoid, is responsible for the characteristic scent of grapefruit. This thiol, however, is present in very low concentrations; in larger concentrations its odor is that typical of all thiols.

Inorganic carbon-sulfur compounds are also well known. Carbon disulfide (CS2) is a volatile liquid that is
structurally similar to carbon dioxide. It is used to make polymers. Whereas carbon monoxide is a highly
stable gas, carbon monosulfide (CS) is a laboratory curiosity with only a fleeting existence.

[edit] History

[edit] Antiquity

Being abundantly available in native form, sulfur (Sanskrit sulvari; Latin sulphurium) was known in ancient times and is referred to in the Torah (Genesis). English translations of the Bible commonly referred to burning sulfur as "brimstone", giving rise to the name of "fire-and-brimstone" sermons, in which listeners are reminded of the fate of eternal damnation that awaits the unbelieving and unrepentant. It is from this part of the Bible that Hell is implied to "smell of sulfur" (likely due to its
association with volcanic activity). According to the Ebers Papyrus, a sulfur ointment was used in ancient
Egypt to treat granular eyelids. Sulfur was used for fumigation in preclassical Greece;[18] this is mentioned
in the Odyssey.[19] Pliny the Elder discusses sulfur in book 35 of his Natural History, saying that its best-
known source is the island of Melos. He also mentions its use for fumigation, medicine, and bleaching
cloth.[20]

A natural form of sulfur known as shiliuhuang had been known in China since the 6th century BC and was found in
Hanzhong.[21] By the 3rd century, the Chinese discovered that sulfur could be extracted from pyrite.[21]
Chinese Daoists were interested in sulfur's flammability and its reactivity with certain metals, yet its
earliest practical uses were found in traditional Chinese medicine.[21] A Song Dynasty military treatise of
1044 AD described different formulas for Chinese black powder, which is a mixture of potassium nitrate
(KNO3), charcoal, and sulfur. Early alchemists gave sulfur its own alchemical symbol which was a
triangle at the top of a cross.

In traditional skin treatments that predate the modern era of scientific medicine, elemental sulfur has been used mainly in creams to alleviate various conditions such as scabies, ringworm,
psoriasis, eczema and acne. The mechanism of action is not known, although elemental sulfur does
oxidize slowly to sulfurous acid, which in turn (through the action of sulfite) acts as a mild reducing and
antibacterial agent.

[edit] Modern times

In 1777, Antoine Lavoisier helped convince the scientific community that sulfur was an element and not a
compound. The Sicilian process was used in ancient times to obtain sulfur from rocks present in volcanic
regions of Sicily. In this process, the sulfur deposits are piled and stacked in brick kilns built on sloping
hillsides, and with airspaces between them. Then powdered sulfur is put on top of the sulfur deposit and
ignited. As the sulfur burns, the heat melts the sulfur deposits, causing the molten sulfur to flow down the
sloping hillside.

Sicilian kiln used to obtain sulfur from volcanic rock.

In 1867, sulfur was discovered in underground deposits in Louisiana and Texas. The highly successful
Frasch process was developed to extract this resource.[22]

In the late 18th century, furniture makers used molten sulfur to produce decorative inlays in their craft.
Because of the sulfur dioxide produced during the process of melting sulfur, the craft of sulfur inlays was
soon abandoned. Molten sulfur is sometimes still used for setting steel bolts into drilled concrete holes
where high shock resistance is desired for floor-mounted equipment attachment points. Pure powdered
sulfur was also used as a medicinal tonic and laxative.[12]

[edit] Spelling and etymology

The element was traditionally spelt sulphur in the United Kingdom (since the 14th century)[23] and in most of the Commonwealth, including India, Malaysia, South Africa, and Hong Kong, along with the Caribbean and Ireland. Sulfur is used in the United States, while both spellings are used in Canada and the Philippines.

However, the IUPAC adopted the spelling sulfur in 1990, as did the Royal Society of Chemistry
Nomenclature Committee in 1992.[24] The Qualifications and Curriculum Authority for England and
Wales recommended its use in 2000,[25] and it now appears in GCSE exams.[26] The Oxford Dictionaries
note that "In chemistry... the -f- spelling is now the standard form in all related words in the field in both British and US contexts".[27]

In Latin, the word is variously written sulpur, sulphur, and sulfur (the Oxford Latin Dictionary lists the
spellings in this order). It is an original Latin name and not a Classical Greek loan, so the ph variant does
not denote the Greek letter φ (phi). Sulfur in Greek is thion (θείον), whence comes the prefix thio-. The
simplification of the Latin words p or ph to an f appears to have taken place towards the end of the
classical period.[28][29]

[edit] Applications

[edit] Sulfuric acid

Elemental sulfur is mainly used as a precursor to other chemicals. Approximately 85% (1989) is
converted to sulfuric acid (H2SO4):

2 S + 3 O2 + 2 H2O → 2 H2SO4
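
This overall equation summarizes the contact process, which proceeds in stages: sulfur is burned to sulfur dioxide, the dioxide is catalytically oxidized to the trioxide (over a vanadium(V) oxide catalyst), and the trioxide is combined with water (in practice by absorption into concentrated acid):

S + O2 → SO2
2 SO2 + O2 → 2 SO3
SO3 + H2O → H2SO4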

Because sulfuric acid is of central importance to the world's economies, its production and consumption are an indicator of a nation's industrial development.[30] For example, at 36.1 million metric tons in 2007, more sulfuric acid is produced in the United States every year than any other inorganic industrial chemical.[31] The principal use for the acid is the extraction of phosphate ores for fertilizer manufacturing. Other applications of sulfuric acid include oil refining, wastewater processing,
and mineral extraction.[12]

Sulfuric acid production in 2000

[edit] Other large scale sulfur chemicals

Sulfur reacts directly with methane to give carbon disulfide, which is used to manufacture cellophane and
rayon.[12] One of the direct uses of sulfur is in vulcanization of rubber, where polysulfides crosslink
organic polymers. Sulfites are heavily used to bleach paper and as preservatives in dried fruit. Many surfactants and detergents, e.g. sodium lauryl sulfate, are sulfate derivatives. Calcium sulfate (gypsum, CaSO4·2H2O) is mined on the scale of 100 million tons each year for use in Portland cement and fertilizers.
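
The reaction of sulfur with methane mentioned above can be summarized by the overall equation (carried out industrially at high temperature):

CH4 + 4 S → CS2 + 2 H2S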

When silver-based photography was widespread, sodium and ammonium thiosulfate were widely used as
"fixing agents." Sulfur is a component of gunpowder.

[edit] Fertilizer

Sulfur is increasingly used as a component of fertilizers. The most important form of sulfur for fertilizer is
the mineral calcium sulfate. Elemental sulfur is hydrophobic (that is, not soluble in water) and therefore cannot be taken up by plants directly; soil bacteria first convert it to soluble derivatives. Sulfur
also improves the use efficiency of other essential plant nutrients, particularly nitrogen and phosphorus.[32]
Biologically produced sulfur particles are naturally hydrophilic due to a biopolymer coating. This sulfur is
therefore easier to disperse over the land (via spraying as a diluted slurry), and results in a faster release.

Plant requirements for sulfur equal or exceed those for phosphorus. It is one of the major nutrients essential for plant growth, root nodule formation in legumes, and plant protection mechanisms. Sulfur
deficiency has become widespread in many countries in Europe.[33][34][35] Because atmospheric inputs of
sulfur will continue to decrease, the deficit in the sulfur input/output is likely to increase, unless sulfur
fertilizers are used.

[edit] Fine chemicals

A molecular model of the pesticide malathion.

Organosulfur compounds are also used in pharmaceuticals, dyestuffs, and agrichemicals. Many drugs
contain sulfur, early examples being the sulfa drugs. Sulfur is a part of many bacterial defense molecules.
Most beta-lactam antibiotics, including the penicillins, cephalosporins, and monobactams, contain
sulfur.[17]

Magnesium sulfate, better known as Epsom salts, can be used as a laxative, a bath additive, an exfoliant, a magnesium supplement for plants, or a desiccant.

[edit] Fungicide and pesticide

Elemental sulfur is one of the oldest fungicides and pesticides. Dusting sulfur, elemental sulfur in powdered form, is a common fungicide for grapes, strawberries, many vegetables, and several other crops. It is effective against a wide range of powdery mildew diseases as well as black spot. In organic
production, sulfur is the most important fungicide. It is the only fungicide used in organically farmed
apple production against the main disease apple scab under colder conditions. Biosulfur (biologically
produced elemental sulfur with hydrophilic characteristics) can be used well for these applications.

Standard-formulation dusting sulfur is applied to crops with a sulfur duster or from a dusting plane.
Wettable sulfur is the commercial name for dusting sulfur formulated with additional ingredients to make
it water miscible.[36] It has similar applications, and is used as a fungicide against mildew and other mold-
related problems with plants and soil.

Sulfur is also used as an "organic" (i.e. "green") insecticide (actually an acaricide) against ticks and mites.
A common method of use is to dust clothing or limbs with sulfur powder. Some livestock owners set out
a sulfur salt block as a salt lick.[citation needed]

[edit] Biological role

[edit] Protein and organic cofactors

Sulfur is an essential component of all living cells. In plants and animals the amino acids cysteine and
methionine contain sulfur, as do all polypeptides, proteins, and enzymes that contain these amino acids.
Disulfide bonds (S-S bonds) formed between cysteine residues in peptide chains are very important in
protein assembly and structure. These covalent bonds between peptide chains confer extra toughness and
rigidity.[37] For example, the high strength of feathers and hair is due in part to their high content of cysteine-derived S-S bonds. Eggs are high in sulfur because large amounts of the
element are necessary for feather formation, and the characteristic odor of rotting eggs is due to hydrogen
sulfide. The high disulfide bond content of hair and feathers contributes to their indigestibility, and also to
their characteristic disagreeable odor when burned.

Homocysteine and taurine are other sulfur-containing acids that are similar in structure, but which are not
coded by DNA, and are not part of the primary structure of proteins. Many important cellular enzymes
use prosthetic groups ending with -SH moieties to handle reactions involving acyl-containing
biochemicals: two common examples from basic metabolism are coenzyme A and alpha-lipoic acid.[37]
Sulfur also plays an important part as a carrier of reducing hydrogen and its electrons, for cellular repair
of oxidation. Reduced glutathione, a sulfur-containing tripeptide, is a reducing agent through its
sulfhydryl (-SH) moiety derived from cysteine. The thioredoxins are essential classes of small proteins
acting as general reducing agents in cells, that use pairs of reduced cysteines to similar effect.

Methanogenesis, the route to most of the world's methane, is a multistep biochemical transformation of
carbon dioxide. This conversion requires several organosulfur cofactors. These include coenzyme M,
CH3SCH2CH2SO3-, the immediate precursor to methane.[38]
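
The final, methane-releasing step, catalyzed by methyl-coenzyme M reductase, can be summarized as:

CH3-S-CoM + HS-CoB → CH4 + CoM-S-S-CoB

where HS-CoB is coenzyme B and CoM-S-S-CoB is the resulting heterodisulfide.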

[edit] Metalloproteins and inorganic cofactors

Inorganic sulfur forms a part of iron-sulfur clusters as well as many copper, nickel, and iron proteins.
Most pervasive are the ferredoxins, which serve as electron shuttles in cells. Nitrogenase, an Fe-Mo-S cluster, is a catalyst that converts atmospheric nitrogen to ammonia, required by plants and
microorganisms.[39]

[edit] Sulfur metabolism

Main article: Sulfur assimilation

Main article: Sulfur cycle

Sulfur may also serve as an energy source (chemical food) for bacteria. Some use hydrogen sulfide (H2S) in the place of water as the electron donor in a primitive photosynthesis-like process. The photosynthetic green and purple sulfur bacteria, and some chemolithotrophs, oxidize hydrogen sulfide to produce elemental sulfur (S0), oxidation state = 0. Primitive bacteria that live around deep-ocean volcanic vents oxidize hydrogen sulfide in this way with oxygen; see giant tube worm for an example of large organisms making metabolic use (via bacteria) of hydrogen sulfide as food to be oxidized.
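
These oxidations can be summarized as follows, the first for the chemolithotrophs and the second for the anoxygenic photosynthesis of the sulfur bacteria, with [CH2O] standing for fixed carbohydrate:

2 H2S + O2 → 2 S + 2 H2O
CO2 + 2 H2S → [CH2O] + H2O + 2 S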

The sulfate-reducing bacteria, by contrast, "breathe sulfate" instead of oxygen. They use oxidized sulfur compounds as electron acceptors, reducing them back to sulfide, often hydrogen sulfide. They can also grow on a number of other partially oxidized sulfur compounds (e.g. thiosulfates, thionates, polysulfides, sulfites). The hydrogen sulfide produced by these bacteria is responsible for the smell of some intestinal gases and decomposition products.

Sulfur is absorbed by plants through the roots from soil as sulfate and is transported within the plant as a phosphate ester.
Sulfate is reduced to sulfide via sulfite before it is incorporated into cysteine and other organosulfur
compounds.[40]

SO42- → SO32- → H2S → cysteine
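
The overall reduction from sulfate to hydrogen sulfide is an eight-electron process:

SO42- + 10 H+ + 8 e- → H2S + 4 H2O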

[edit] Precautions

Elemental sulfur is non-toxic, but it can burn, producing sulfur dioxide. Although sulfur dioxide is
sufficiently safe to be used as a food additive in small amounts, at high concentrations it harms the lungs,
eyes or other tissues. In organisms without lungs, such as insects and plants, it prevents respiration. Sulfur trioxide and sulfuric acid are highly corrosive as well; sulfur trioxide forms the strong acid sulfuric acid on contact with water.

Effect of acid rain on a forest, Jizera Mountains, Czech Republic

The burning of coal and/or petroleum by industry and power plants generates sulfur dioxide (SO2), which
reacts with atmospheric water and oxygen to produce sulfuric acid (H2SO4) and sulfurous acid (H2SO3).
These acids are components of acid rain, which lower the pH of soil and freshwater bodies, sometimes
resulting in substantial damage to the environment and chemical weathering of statues and structures.
Fuel standards increasingly require sulfur to be extracted from fossil fuels to prevent the formation of acid rain. This extracted sulfur is then refined and represents a large portion of sulfur production. In coal-fired power plants, the flue gases are sometimes purified. In more modern power plants that use syngas, the sulfur is extracted before the gas is burned.
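
The acid-forming reactions can be summarized as:

SO2 + H2O → H2SO3
2 SO2 + O2 → 2 SO3
SO3 + H2O → H2SO4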

Hydrogen sulfide is as toxic as hydrogen cyanide and kills by the same mechanism, although hydrogen
sulfide is less likely to result in surprise poisonings from small inhaled amounts, due to its more
disagreeable warning odor. However, although very pungent at first, hydrogen sulfide quickly deadens the sense of smell, so potential victims breathing larger and larger quantities of it may be unaware of its presence until severe symptoms occur (these can then quickly lead
to death).

Phosphorus

From Wikipedia, the free encyclopedia


This article is about the chemical element. For other uses, see Phosphorus (disambiguation).

silicon ← phosphorus → sulfur

15P — position in the periodic table: nitrogen above, arsenic below.

Appearance

colorless, waxy white, yellow, scarlet, red, violet, black

waxy white (yellow cut), red (granules center left, chunk center right),
and violet phosphorus

General properties

Name, symbol, number phosphorus, P, 15

Pronunciation /ˈfɒsfərəs/ FOS-fər-əs

Element category nonmetal

Group, period, block 15, 3, p

Standard atomic weight 30.973762 g·mol−1

Electron configuration [Ne] 3s2 3p3

Electrons per shell 2, 8, 5 (Image)

Physical properties

Density (near r.t.) (white) 1.823, (red) ≈ 2.2–2.34, (violet) 2.36, (black) 2.69 g·cm−3

Melting point (white) 44.2 °C, (black) 610 °C

Sublimation point (red) ≈ 416 – 590 °C, (violet) 620 °C

Boiling point (white) 280.5 °C

Heat of fusion (white) 0.66 kJ·mol−1

Heat of vaporization (white) 12.4 kJ·mol−1

Specific heat capacity (25 °C) (white) 23.824 J·mol−1·K−1

Vapor pressure (white)

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 279 307 342 388 453 549

Vapor pressure (red, bp. 431 °C)

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 455 489 529 576 635 704

Atomic properties

Oxidation states 5, 4, 3, 2,[1] 1,[2] −1, −2, −3 (mildly acidic oxide)

Electronegativity 2.19 (Pauling scale)

Ionization energies (more) 1st: 1011.8 kJ·mol−1; 2nd: 1907 kJ·mol−1; 3rd: 2914.1 kJ·mol−1

Covalent radius 107±3 pm

Van der Waals radius 180 pm

Miscellanea

Magnetic ordering (white, red, violet, black) diamagnetic[3]

Thermal conductivity (300 K) (white) 0.236, (black) 12.1 W·m−1·K−1

Bulk modulus (white) 5, (red) 11 GPa

CAS registry number 7723-14-0

Most stable isotopes

Main article: Isotopes of phosphorus

iso NA half-life DM DE (MeV) DP
31P 100% 31P is stable with 16 neutrons
32P syn 14.28 d β− 1.709 32S
33P syn 25.3 d β− 0.249 33S


Phosphorus ( /ˈfɒsfərəs/ FOS-fər-əs) is the chemical element that has the symbol P and atomic number
15. A multivalent nonmetal of the nitrogen group, phosphorus as a mineral is almost always present in its maximally oxidized state, as inorganic phosphate rocks. Elemental phosphorus exists in two major forms, white phosphorus and red phosphorus, but because of its high reactivity, phosphorus is never found as a free element on Earth.

The first form of elemental phosphorus to be produced (white phosphorus, in 1669) emits a faint glow
upon exposure to oxygen – hence its name given from Greek mythology, Φωσφόρος meaning "light-
bearer" (Latin Lucifer), referring to the "Morning Star", the planet Venus. Although the term
"phosphorescence", meaning glow after illumination, derives from this property of phosphorus, the glow
of phosphorus originates from oxidation of the white (but not red) phosphorus and should be called
chemiluminescence.

Phosphorus compounds are used in explosives, nerve agents, friction matches, fireworks, pesticides,
toothpastes, and detergents.

Phosphorus is a component of DNA, RNA, ATP, and also the phospholipids that form all cell
membranes. It is thus an essential element for all living cells, and organisms tend to accumulate and
concentrate it. For example, elemental phosphorus was historically first isolated from human urine, and bone ash was an important early phosphate source. Low phosphate levels are an
important limit to growth in some aquatic systems. Today, the most important commercial use of
phosphorus-based chemicals is the production of fertilizers, to replace the phosphorus that plants remove
from the soil.

Contents


 1 Physical properties
o 1.1 Luminescence
o 1.2 Allotropes
o 1.3 Isotopes

 2 Chemical properties
o 2.1 Chemical bonding
o 2.2 Phosphine, diphosphine and phosphonium salts
o 2.3 Halides
o 2.4 Oxides and oxyacids
 3 Spelling and etymology
 4 Stellar nucleosynthesis
 5 History and discovery
 6 Occurrence
 7 Production
 8 Applications
 9 Biological role
 10 Precautions
o 10.1 US DEA List I status
 11 See also
 12 Notes
 13 References
o 13.1 Notes
o 13.2 Sources
 14 External links

Physical properties

Luminescence

In 1669, German alchemist Hennig Brand attempted to create the philosopher's stone from his urine, and
in the process he produced a white material that glowed in the dark.[4] The phosphorus had been produced
from inorganic phosphate, which is a significant component of dissolved urine solids. White phosphorus
is highly reactive and gives off a faint greenish glow upon uniting with oxygen. The glow observed by
Brand was caused by the very slow burning of the phosphorus, but as he neither saw flame nor felt any
heat he did not recognize it as burning.

It was known from early times that the glow would persist for a time in a stoppered jar but then cease.
Robert Boyle in the 1680s ascribed it to "debilitation" of the air; in fact, it is oxygen being consumed. By
the 18th century, it was known that in pure oxygen, phosphorus does not glow at all; [5] there is only a
range of partial pressure at which it does. Heat can be applied to drive the reaction at higher pressures.[6]

In 1974, the glow was explained by R. J. van Zee and A. U. Khan.[7] A reaction with oxygen takes place at the surface of the solid (or liquid) phosphorus, forming the short-lived molecules HPO and P2O2, which both emit visible light. The reaction is slow and only tiny amounts of the intermediates are required to produce the luminescence, hence the extended time the glow continues in a stoppered jar.

Although the term phosphorescence is derived from phosphorus, the reaction that gives phosphorus its
glow is properly called chemiluminescence (glowing due to a cold chemical reaction), not
phosphorescence (re-emitting light that previously fell onto a substance and excited it).

Allotropes

Main article: Allotropes of phosphorus

P4 molecule

Phosphorus has several forms (allotropes) that have strikingly different properties.[8] The two most
common allotropes are white phosphorus and red phosphorus. Red phosphorus is an intermediate phase
between white and violet phosphorus. Another form, scarlet phosphorus, is obtained by allowing a
solution of white phosphorus in carbon disulfide to evaporate in sunlight. Black phosphorus is obtained
by heating white phosphorus under high pressures (about 12,000 standard atmospheres or 1.2 GPa). In appearance, properties, and structure, it resembles graphite, being black and flaky, conducting electricity, and having puckered sheets of linked atoms. Another allotrope is diphosphorus; it contains a phosphorus dimer as a structural unit and is highly reactive.[9]

P4O10 molecule

White phosphorus has two forms, low-temperature β form and high-temperature α form. They both
contain a phosphorus P4 tetrahedron as a structural unit, in which each atom is bound to the other three
atoms by a single bond. This P4 tetrahedron is also present in liquid and gaseous phosphorus up to the
temperature of 800 °C, when it starts decomposing to P2 molecules.[10] White phosphorus is the least stable, the most reactive, the most volatile, the least dense, and the most toxic of the allotropes. The toxicity of white phosphorus led to its discontinued use in matches.[11] White phosphorus is thermodynamically unstable at normal conditions and will gradually change to red phosphorus. This transformation, which is
accelerated by light and heat, makes white phosphorus almost always contain some red phosphorus and
therefore appear yellow. For this reason, it is also called yellow phosphorus. It glows greenish in the dark
(when exposed to oxygen), is highly flammable and pyrophoric (self-igniting) upon contact with air as
well as toxic (causing severe liver damage on ingestion). Because of its pyrophoricity, white phosphorus is used as an additive in napalm. Its combustion has a characteristic garlic odour, and samples are commonly coated with white "(di)phosphorus pentoxide", which consists of P4O10 tetrahedra
with oxygen inserted between the phosphorus atoms and at their vertices. White phosphorus is insoluble
in water but soluble in carbon disulfide.[12]

The white allotrope can be produced using several different methods. In one process, calcium phosphate,
which is derived from phosphate rock, is heated in an electric or fuel-fired furnace in the presence of carbon and silica.[13] Elemental phosphorus is then liberated as a vapour and can be collected under
phosphoric acid. This process is similar to the first synthesis of phosphorus from calcium phosphate in
urine.

Crystal structure of red phosphorus

In red phosphorus, one of the P4 bonds is broken, and one additional bond is formed with a
neighbouring tetrahedron resulting in a more chain-like structure. Red phosphorus may be formed by
heating white phosphorus to 250 °C (482 °F) or by exposing white phosphorus to sunlight.[4] Phosphorus
after this treatment exists as an amorphous network of atoms that reduces strain and gives greater
stability; further heating results in the red phosphorus becoming crystalline. Red phosphorus is therefore not a single well-defined allotrope, but rather an intermediate phase between white and violet phosphorus, and
most of its properties have a range of values. Red phosphorus does not catch fire in air at temperatures
below 260 °C, whereas white phosphorus ignites at about 30 °C.[14]

Violet phosphorus is a thermodynamically stable form of phosphorus that can be produced by day-long annealing of red phosphorus above 550 °C. In 1865, Hittorf discovered that when phosphorus was
recrystallized from molten lead, a red/purple form is obtained. Therefore this form is sometimes known as
"Hittorf's phosphorus" (or violet or α-metallic phosphorus).[9]

Crystal structure of black phosphorus

Black phosphorus is the least reactive allotrope and the thermodynamically stable form below 550 °C. It is
also known as β-metallic phosphorus and has a structure somewhat resembling that of graphite.[15][16] High
pressures are usually required to produce black phosphorus, but it can also be produced at ambient
conditions using metal salts as catalysts.[17]

The diphosphorus allotrope, P2, is stable only at high temperatures. The dimeric unit contains a triple
bond and is analogous to N2. The diphosphorus allotrope (P2) can be obtained normally only under
extreme conditions (for example, from P4 at 1100 kelvin). Nevertheless, some advances have been made in generating the diatomic molecule in homogeneous solution under normal conditions with the use of transition metal complexes (based on, for example, tungsten and niobium).[18]

Properties of some allotropes of phosphorus[8][9]

Form white(α) white(β) violet black

Symmetry Body-centred cubic Triclinic Monoclinic Orthorhombic

Pearson symbol aP24 mP84 oS8

Space group I43m P1 No.2 P2/c No.13 Cmca No.64

Density (g/cm3) 1.828 1.88 2.36 2.69

Bandgap (eV) 2.1 1.5 0.34

Refractive index 1.8244 2.6 2.4

Isotopes

Main article: Isotopes of phosphorus

Although twenty-three isotopes of phosphorus are known[19] (all possibilities from 24P up to 46P), only 31P,
with spin 1⁄2, is stable and is therefore present at 100% abundance. The half-integer spin and high
abundance of 31P make phosphorus-31 NMR a very useful tool in studies of biomolecules, particularly
DNA.

Two radioactive isotopes of phosphorus have half-lives that make them useful for scientific experiments. 32P has a half-life of 14.262 days and 33P has a half-life of 25.34 days. Biomolecules can be "tagged" with a radioisotope to allow for the study of very dilute samples.

Radioactive isotopes of phosphorus include

 32
P, a beta-emitter (1.71 MeV) with a half-life of 14.3 days, which is used routinely in life-science
laboratories, primarily to produce radiolabeled DNA and RNA probes, e.g. for use in Northern
blots or Southern blots. Because the high energy beta particles produced penetrate skin and
corneas, and because any 32P ingested, inhaled, or absorbed is readily incorporated into bone and
nucleic acids, Occupational Safety and Health Administration in the United States, and similar
institutions in other developed countries require that a lab coat, disposable gloves and safety
glasses or goggles be worn when working with 32P, and that working directly over an open
container be avoided in order to protect the eyes. Monitoring personal, clothing, and surface
contamination is also required. In addition, due to the high energy of the beta particles, shielding
this radiation with the normally used dense materials (e.g. lead), gives rise to secondary emission

417
of X-rays via a process known as Bremsstrahlung, meaning braking radiation. Therefore shielding
must be accomplished with low density materials, e.g. Plexiglas (Lucite), other plastics, water, or
(when transparency is not required), even wood.[20]
 33
P, a beta-emitter (0.25 MeV) with a half-life of 25.4 days. It is used in life-science laboratories
in applications in which lower energy beta emissions are advantageous such as DNA sequencing.

Chemical properties


See also: Category:Phosphorus compounds

 Hydrides: PH3, P2H4


 Halides: PBr5, PBr3, PCl3, PI3
 Oxides:P4O6, P4O10
 Sulfides: P4S3, P4S10
 Acids: H3PO2, H3PO3, H3PO4
 Phosphates: (NH4)3PO4, Ca3(PO4)2, FePO4, Fe3(PO4)2, Na3PO4, Ca(H2PO4)2, KH2PO4
 Phosphides: Ca3P2, GaP, Zn3P2, Cu3P
 Organophosphorus and organophosphates: Lawesson's reagent, Parathion, Sarin, Soman, Tabun,
Triphenyl phosphine, VX nerve gas

Chemical bonding

For more details on this topic, see Octet rule.

Because phosphorus is just below nitrogen in the periodic table, the two elements share many of their
bonding characteristics. For instance, phosphine, PH3, is an analogue of ammonia, NH3. Phosphorus, like
nitrogen, is trivalent in this molecule.

The "trivalent" or simple 3-bond view is the pre-quantum mechanical Lewis structure, which although
somewhat of a simplification from a quantum chemical point of view, illustrates some of the
distinguishing chemistry of the element. In quantum chemical valence bond theory, the valence electrons
are seen to be in mixtures of four s and p atomic orbitals, so-called hybrids. In this view, the three
unpaired electrons in the three 3p orbitals combine with the two electrons in the 3s orbital to form three
electron pairs of opposite spin, available for the formation of three bonds. The remaining hybrid orbital
contains two paired non-bonding electrons, which show as a lone pair in the Lewis structure.

The phosphorus cation is very similar to the nitrogen cation. In the same way that nitrogen forms the
tetravalent ammonium ion, phosphorus can form the tetravalent phosphonium ion, and form salts such as
phosphonium iodide [PH4]+[I– ].

Like other elements in the third or lower rows of the periodic table, phosphorus atoms can expand their valence to make penta- and hexavalent compounds. Phosphorus pentachloride, PCl5, is an example. When the ligands on phosphorus are not identical, the more electronegative ligands occupy the axial (apical) positions of the trigonal bipyramid and the less electronegative ligands the equatorial positions.

With strongly electronegative ions, in particular fluorine, hexavalency as in PF6– occurs as well. This
octahedral ion is isoelectronic with SF6. In the bonding the six octahedral sp3d2 hybrid atomic orbitals
play an important role.

Before extensive computer calculations were feasible, it was generally assumed that the nearby d orbitals
in the n = 3 shell were the obvious cause of the difference in binding between nitrogen and phosphorus
(i.e., phosphorus had 3d orbitals available for 3s and 3p shell bonding electron hybridisation, but nitrogen
did not). However, in the early eighties the German theoretical chemist Werner Kutzelnigg[21] found from
an analysis of computer calculations that the difference in binding is more likely due to differences in
character between the valence 2p and valence 3p orbitals of nitrogen and phosphorus, respectively. The 2s
and 2p orbitals of first row atoms are localized in roughly the same region of space, while the 3p orbitals
of phosphorus are much more extended in space. The violation of the octet rule observed in compounds of
phosphorus is then due to the size of the phosphorus atom, and the corresponding reduction of steric
hindrance between its ligands. In modern theoretical chemistry, Kutzelnigg's analysis is generally
accepted.

The simple Lewis structure for the trigonal bipyramidal PCl5 molecule contains five covalent bonds,
implying a hypervalent molecule with ten valence electrons contrary to the octet rule.

An alternate description of the bonding, however, respects the octet rule by using 3-centre-4-electron (3c-
4e) bonds. In this model, the octet on the P atom corresponds to six electrons, which form three Lewis
(2c-2e) bonds to the three equatorial Cl atoms, plus the two electrons in the 3-centre Cl-P-Cl bonding molecular orbital shared with the two axial Cl atoms. The two electrons in the corresponding nonbonding
molecular orbital are not included because this orbital is localized on the two Cl atoms and does not
contribute to the electron density on the phosphorus atom. (However, it should always be remembered
that the octet rule is not some universal rule of chemical bonding, and while many compounds obey it,
there are many elements to which it does not apply).

Phosphine, diphosphine and phosphonium salts

Phosphine (PH3) and arsine (AsH3) are structural analogues with ammonia (NH3) and form pyramidal
structures with the phosphorus or arsenic atom in the centre bound to three hydrogen atoms and one lone
electron pair. Both are colourless, ill-smelling, toxic compounds. Phosphine is produced in a manner
similar to the production of ammonia. Hydrolysis of calcium phosphide, Ca3P2, or calcium nitride, Ca3N2
produces phosphine or ammonia, respectively. Unlike ammonia, phosphine is unstable and reacts readily with air, giving off clouds of phosphoric acid. Arsine is even less stable. Although phosphine is less basic than ammonia, it can form some phosphonium salts (like PH4I), analogues of ammonium salts, but these salts immediately decompose in water and do not yield phosphonium (PH4+) ions. Diphosphine (P2H4 or H2P-PH2) is an analogue of hydrazine (N2H4); it is a colourless liquid that spontaneously ignites in air and can disproportionate into phosphine and complex hydrides.
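
The hydrolysis routes mentioned above can be written as:

Ca3P2 + 6 H2O → 3 Ca(OH)2 + 2 PH3
Ca3N2 + 6 H2O → 3 Ca(OH)2 + 2 NH3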

Halides

The trihalides PF3, PCl3, PBr3 and PI3 and the pentahalides, PCl5 and PBr5 are all known and mixed
halides can also be formed. The trihalides can be formed simply by mixing the appropriate stoichiometric
amounts of phosphorus and a halogen.[citation needed] For safety reasons, however, PF3 is typically made by
reacting PCl3 with AsF5 and fractional distillation because the direct reaction of phosphorus with fluorine
can be explosive. The pentahalides, PX5, are synthesized by reacting excess halogen with either elemental
phosphorus or with the corresponding trihalide. Mixed phosphorus halides are unstable and decompose to form simple halides; thus 5 PF3Br2 decomposes into 3 PF5 + 2 PBr5.[citation needed]
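
With chlorine, for example, the direct syntheses are:

P4 + 6 Cl2 → 4 PCl3
P4 + 10 Cl2 → 4 PCl5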

Oxides and oxyacids

Phosphorus(III) oxide, P4O6 (also called tetraphosphorus hexoxide) and phosphorus(V) oxide, P4O10 (or
tetraphosphorus decoxide) are acid anhydrides of phosphorus oxyacids and hence readily react with
water. P4O10 is a particularly good dehydrating agent that can even remove water from nitric acid, HNO3.
The structure of P4O6 is like that of P4 with an oxygen atom inserted between each of the P-P bonds. The
structure of P4O10 is like that of P4O6 with the addition of one oxygen bond to each phosphorus atom via a
double bond and protruding away from the tetrahedral structure.
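
As an illustration of its dehydrating power, P4O10 converts nitric acid into dinitrogen pentoxide:

P4O10 + 12 HNO3 → 4 H3PO4 + 6 N2O5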

Phosphorus oxyacids can have acidic protons bound to oxygen atoms and nonacidic protons that are bonded directly to the phosphorus atom. Although many oxyacids of phosphorus are formed, only six are important (see table), and three of them, hypophosphorous acid, phosphorous acid, and phosphoric acid, are particularly important.

Oxidation state Formula Name Acidic protons Compounds

+1 H3PO2 hypophosphorous acid 1 acid, salts

+3 H3PO3 (ortho)phosphorous acid 2 acid, salts

+5 (HPO3)n metaphosphoric acids n salts (n=3,4)

+5 H5P3O10 triphosphoric acid 3 salts

+5 H4P2O7 pyrophosphoric acid 4 acid, salts

+5 H3PO4 (ortho)phosphoric acid 3 acid, salts

Spelling and etymology

In Ancient Greece, Phosphorus was the name for the planet Venus; it is derived from the
Greek words (φως = light, φέρω = carry), which roughly translates as light-bringer or light carrier.[4] (In
Greek mythology and tradition, Augerinus (Αυγερινός = morning star, in use until today), Hesperus or
Hesperinus (΄Εσπερος or Εσπερινός or Αποσπερίτης = evening star, in use until today) and Eosphorus
(Εωσφόρος = dawnbearer, not in use for the planet after Christianity) are close homologues, and also
associated with Phosphorus-the-planet).

According to the Oxford English Dictionary, the correct spelling of the element is phosphorus. The word
phosphorous is the adjectival form of the P3+ valence: so, just as sulfur forms sulfurous and sulfuric
compounds, phosphorus forms phosphorous compounds (see, e.g., phosphorous acid) and P5+ valency
phosphoric compounds (see, e.g., phosphoric acids and phosphates).

Stellar nucleosynthesis

Stable forms of phosphorus are produced in large (greater than 3 solar masses) stars by fusing two oxygen
atoms together.[citation needed] This requires temperatures above 1,000 megakelvins.
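
One branch of this oxygen-burning process can be written as:

16O + 16O → 31P + 1H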

History and discovery

The discovery of phosphorus is credited to the German alchemist Hennig Brand in 1669, although other
chemists might have discovered phosphorus around the same time.[22] Brand experimented with urine,
which contains considerable quantities of dissolved phosphates from normal metabolism. [4] Working in
Hamburg, Brand attempted to create the fabled philosopher's stone through the distillation of some salts
by evaporating urine, and in the process produced a white material that glowed in the dark and burned
brilliantly. It was named phosphorus mirabilis ("miraculous bearer of light").[23] His process originally
involved letting urine stand for days until it gave off a terrible smell. Then he boiled it down to a paste,
heated this paste to a high temperature, and led the vapours through water, where he hoped they would
condense to gold. Instead, he obtained a white, waxy substance that glowed in the dark. Brand had
discovered phosphorus, the first element discovered since antiquity. We now know that Brand produced
ammonium sodium hydrogen phosphate, (NH4)NaHPO4. While the quantities were essentially correct (it
took about 1,100 L of urine to make about 60 g of phosphorus), it was unnecessary to allow the urine to
rot. Later scientists would discover that fresh urine yielded the same amount of phosphorus.

Since that time, phosphors and phosphorescence have been used loosely to describe substances that shine in the dark without burning. However, as mentioned above, even though the term phosphorescence was originally coined by analogy with the glow from oxidation of elemental phosphorus, it is now reserved for another, fundamentally different process: re-emission of light after illumination.

Brand at first tried to keep the method secret,[24] but later sold the recipe for 200 thaler to D Krafft from
Dresden,[4] who could now make it as well, and toured much of Europe with it, including England, where
he met with Robert Boyle. The secret that it was made from urine leaked out and first Johann Kunckel
(1630–1703) in Sweden (1678) and later Boyle in London (1680) also managed to make phosphorus.
Boyle states that Krafft gave him no information as to the preparation of phosphorus other than that it was
derived from "somewhat that belonged to the body of man". This gave Boyle a valuable clue, however, so
that he, too, managed to make phosphorus, and published the method of its manufacture. [4] Later he
improved Brand's process by using sand in the reaction (still using urine as base material),

4 NaPO3 + 2 SiO2 + 10 C → 2 Na2SiO3 + 10 CO + P4

Robert Boyle was the first to use phosphorus to ignite sulfur-tipped wooden splints, forerunners of our
modern matches, in 1680.

In 1769 Johan Gottlieb Gahn and Carl Wilhelm Scheele showed that calcium phosphate (Ca3(PO4)2) is
found in bones, and they obtained phosphorus from bone ash. Antoine Lavoisier recognized phosphorus
as an element in 1777.[25] Bone ash was the major source of phosphorus until the 1840s. Phosphate rock, a
mineral containing calcium phosphate, was first used in 1850 and following the introduction of the
electric arc furnace in 1890, this became the only source of phosphorus. Phosphorus, phosphates and
phosphoric acid are still obtained from phosphate rock. Phosphate rock is a major feedstock in the
fertilizer industry.

Early matches used white phosphorus in their composition, which was dangerous due to its toxicity.
Murders, suicides and accidental poisonings resulted from its use. (An apocryphal tale tells of a woman
attempting to murder her husband with white phosphorus in his food, which was detected by the stew
giving off luminous steam).[7] In addition, exposure to the vapours gave match workers a severe necrosis
of the bones of the jaw, the infamous "phossy jaw". When a safe process for manufacturing red
phosphorus was discovered, with its far lower flammability and toxicity, laws were enacted, under the
Berne Convention (1906), requiring its adoption as a safer alternative for match manufacture.[12]

Ironically, the Allies used phosphorus incendiary bombs in World War II to destroy Hamburg, the place
where the "miraculous bearer of light" was first discovered.[23]

Occurrence

See also: Category:Phosphate minerals

Due to its reactivity with air and many other oxygen-containing substances, phosphorus is not found free
in nature but it is widely distributed in many different minerals.

Phosphate rock, which is partially made of apatite (an impure tri-calcium phosphate mineral), is an
important commercial source of this element. About 50 percent of the global phosphorus reserves are in
the Arab nations.[26] Large deposits of apatite are located in China, Russia, Morocco, Florida, Idaho,
Tennessee, Utah, and elsewhere. Albright and Wilson in the United Kingdom and their Niagara Falls
plant, for instance, were using phosphate rock in the 1890s and 1900s from Connetable, Tennessee and
Florida; by 1950 they were using phosphate rock mainly from Tennessee and North Africa.[13] In the early
1990s Albright and Wilson's purified wet phosphoric acid business was being adversely affected by
phosphate rock sales by China and the entry of their long-standing Moroccan phosphate suppliers into the
purified wet phosphoric acid business.[27]

In 2007, at the current rate of consumption, the supply of phosphorus was estimated to run out in 345
years.[28] However, scientists are now claiming that a "Peak Phosphorus" will occur in 30 years and that
"At current rates, reserves will be depleted in the next 50 to 100 years."[29]

The stability of the +5 oxidation state is illustrated by the wide range of phosphate minerals found in the Earth's crust.

Production

White phosphorus was first made commercially, for the match industry in the 19th century, by distilling off phosphorus vapour from precipitated phosphates mixed with ground coal or charcoal, heated in an iron retort.[30] The precipitated phosphates were made from ground-up bones that had
been de-greased and treated with strong acids. Carbon monoxide and other flammable gases produced
during the reduction process were burnt off in a flare stack.

This process became obsolete when the submerged-arc furnace for phosphorus production was introduced
to reduce phosphate rock.[31][32] Calcium phosphate (phosphate rock), mostly mined in Florida and North
Africa, can be heated to 1,200–1,500 °C with sand, which is mostly SiO2, and coke (impure carbon) to
produce vaporized tetraphosphorus, P4, (melting point 44.2 °C), which is subsequently condensed into a
white powder under water to prevent oxidation. Even under water, white phosphorus is slowly converted
to the more stable red phosphorus allotrope (melting point 597 °C). Both the white and red allotropes of
phosphorus are insoluble in water.
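
The overall carbothermal reduction in the furnace can be summarized as:

2 Ca3(PO4)2 + 6 SiO2 + 10 C → 6 CaSiO3 + P4 + 10 CO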

The electric furnace method allowed production to increase to the point where phosphorus could be used
in weapons of war.[7][13] In World War I it was used in incendiaries, smoke screens and tracer bullets.[13] A
special incendiary bullet was developed to shoot at hydrogen-filled Zeppelins over Britain (hydrogen being highly flammable).[13] During World War II, Molotov cocktails of benzene and
phosphorus were distributed in Britain to specially selected civilians within the British resistance
operation, for defence; and phosphorus incendiary bombs were used in war on a large scale. Burning phosphorus is difficult to extinguish, and if it splashes onto human skin it has horrific effects (see precautions below).[12]

Today phosphorus production is larger than ever. It is used as a precursor for various chemicals,[33] in particular the herbicide glyphosate, sold under the brand name Roundup. Production of white phosphorus takes place at large facilities, and it is transported heated, in liquid form. Some major accidents have occurred during transportation; train derailments at Brownston, Nebraska, and Miamisburg, Ohio, led to large fires. The worst accident in recent times was an environmental one in 1968, when phosphorus spilled
into the sea from a plant at Placentia Bay, Newfoundland.[34] Thermphos International is Europe's only
producer of elemental phosphorus.

Applications

Match striking surface made of a mixture of red phosphorus, glue and ground glass. The glass powder is
used to increase the friction.

Widely used compounds Use

Ca(H2PO4)2·H2O Baking powder and fertilizers

CaHPO4·2H2O Animal food additive, toothpowder

H3PO4 Manufacture of phosphate fertilizers

PCl3 Manufacture of POCl3 and pesticides

POCl3 Manufacture of plasticizers

P4S10 Manufacturing of additives and pesticides

Na5P3O10 Detergents

Phosphorus, being an essential plant nutrient, finds its major use as a constituent of fertilizers for
agriculture and farm production in the form of concentrated phosphoric acids, which can consist of 70%
to 75% P2O5. Global demand for fertilizers led to a large increase in phosphate (PO43–) production in the second half of the 20th century. Due to the essential nature of phosphorus to living organisms, the low
solubility of natural phosphorus-containing compounds, and the slow natural cycle of phosphorus, the
agricultural industry is heavily reliant on fertilizers that contain phosphate, mostly in the form of
superphosphate of lime. Superphosphate of lime is a mixture of two phosphate salts, calcium dihydrogen
phosphate Ca(H2PO4)2 and calcium sulfate dihydrate CaSO4·2H2O produced by the reaction of sulfuric
acid and water with calcium phosphate.
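
This production of superphosphate of lime can be summarized as:

Ca3(PO4)2 + 2 H2SO4 + 4 H2O → Ca(H2PO4)2 + 2 CaSO4·2H2O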

 Phosphorus is widely used to make organophosphorus compounds, through the intermediates phosphorus chlorides and two phosphorus sulfides: phosphorus pentasulfide and phosphorus sesquisulfide.[13] Organophosphorus compounds have many applications, including in plasticizers,
flame retardants, pesticides, extraction agents, and water treatment.[12]
 Phosphorus is also an important component in steel production, in the making of phosphor
bronze, and in many other related products. Phosphorus is added to metallic copper during its
smelting process to react with oxygen present as an impurity in copper and to produce oxygen-
free copper or phosphorus-containing copper (CuOFP) alloys with a higher thermal and electrical
conductivity than normal copper.
 Phosphates are utilized in the making of special glasses that are used for sodium lamps.[35]
 Bone-ash, calcium phosphate, is used in the production of fine china.[35]
 Sodium tripolyphosphate made from phosphoric acid is used in laundry detergents in some
countries, but banned for this use in others.[35]
 Phosphoric acid made from elemental phosphorus is used in food applications such as some soda
beverages. The acid is also a starting point to make food grade phosphates.[13] These include
mono-calcium phosphate that is employed in baking powder and sodium tripolyphosphate and
other sodium phosphates.[13] Among other uses these are used to improve the characteristics of
processed meat and cheese. Others are used in toothpaste.[13] Trisodium phosphate is used in
cleaning agents to soften water and for preventing pipe/boiler tube corrosion.
 White phosphorus, called "WP" (slang term "Willie Peter") is used in military applications as
incendiary bombs, for smoke-screening as smoke pots and smoke bombs, and in tracer
ammunition. It is also a part of an obsolete M34 White Phosphorus US hand grenade. This
multipurpose grenade was mostly used for signalling, smoke screens and inflammation; it could
also cause severe burns and had a psychological impact on the enemy.[36][37]
 Red phosphorus is essential for manufacturing matchbook strikers, flares,[13] safety matches,
pharmaceutical grade and street methamphetamine, and is used in cap gun caps.
 Phosphorus sesquisulfide is used in heads of strike-anywhere matches.[13]
 In trace amounts, phosphorus is used as a dopant for n-type semiconductors.
 32
P and 33P are used as radioactive tracers in biochemical laboratories (see Isotopes).
 Phosphate is a strong complexing agent for the hexavalent uranyl (UO22+) species and this is the
reason why apatite and other natural phosphates can often be very rich in uranium.
 Tributylphosphate is an organophosphate soluble in kerosene and used to extract uranium in the
Purex process applied in the reprocessing of spent nuclear fuel.

Biological role

Phosphorus is a key element in all known forms of life.[38] Inorganic phosphorus in the form of the
phosphate PO43– plays a major role in biological molecules such as DNA and RNA where it forms part of
the structural framework of these molecules. Living cells also use phosphate to transport cellular energy
in the form of adenosine triphosphate (ATP). Nearly every cellular process that uses energy obtains it in
the form of ATP. ATP is also important for phosphorylation, a key regulatory event in cells.
Phospholipids are the main structural components of all cellular membranes. Calcium phosphate salts
assist in stiffening bones.[12]

Every cell has a membrane that separates it from its surrounding environment. Biological membranes are
made from a phospholipid matrix and proteins, typically in the form of a bilayer. Phospholipids are
derived from glycerol, such that two of the glycerol hydroxyl (OH) protons have been replaced with fatty
acids as an ester, and the third hydroxyl proton has been replaced with phosphate bonded to another
alcohol.[12]

An average adult human contains about 0.7 kg of phosphorus, about 85-90% of which is present in bones
and teeth in the form of apatite, and the remainder in soft tissues and extracellular fluids (~1%). The
phosphorus content increases from about 0.5 weight% in infancy to 0.65-1.1 weight% in adults. Average
phosphorus concentration in the blood is about 0.4 g/L, about 70% of that is organic and 30% inorganic
phosphates.[39] A well-fed adult in the industrialized world consumes and excretes about 1-3 g of
phosphorus per day, with consumption in the form of inorganic phosphate and phosphorus-containing
biomolecules such as nucleic acids and phospholipids; and excretion almost exclusively in the form of
urine phosphate ion. Only about 0.1% of body phosphate circulates in the blood, but this amount reflects
the amount of phosphate available to soft tissue cells.[12]

In medicine, low-phosphate syndromes are caused by malnutrition, by failure to absorb phosphate, and by
metabolic syndromes that draw phosphate from the blood (such as re-feeding after malnutrition) or pass
too much of it into the urine. All are characterized by hypophosphatemia (see article for medical details), a condition of low soluble phosphate levels in the blood serum, and therefore inside cells. Symptoms of hypophosphatemia include muscle and neurological dysfunction, and disruption of
muscle and blood cells due to lack of ATP. Too much phosphate can lead to diarrhoea and calcification
(hardening) of organs and soft tissue, and can interfere with the body's ability to use iron, calcium,
magnesium, and zinc.[40]

Phosphorus is an essential macromineral for plants, which is studied extensively in edaphology in order to
understand plant uptake from soil systems. In ecological terms, phosphorus is often a limiting factor in
many environments; i.e. the availability of phosphorus governs the rate of growth of many organisms. In
ecosystems an excess of phosphorus can be problematic, especially in aquatic systems, see eutrophication
and algal blooms.

Although phosphorus is necessary for all forms of life, one species of bacterium GFAJ-1 may be able to
substitute arsenic for phosphorus in some biomolecules, including DNA.[41]

Precautions

Organic compounds of phosphorus form a wide class of materials, some of which are extremely toxic.
Fluorophosphate esters are among the most potent neurotoxins known. A wide range of
organophosphorus compounds are used for their toxicity to certain organisms as pesticides (herbicides,
insecticides, fungicides, etc.) and weaponised as nerve agents. Most inorganic phosphates are relatively
nontoxic and essential nutrients. For environmentally adverse effects of phosphates see eutrophication
and algal blooms.[12]

The white phosphorus allotrope should be kept under water at all times as it presents a significant fire
hazard due to its extreme reactivity with atmospheric oxygen, and it should only be manipulated with
forceps since contact with skin can cause severe burns. Chronic white phosphorus poisoning leads to
necrosis of the jaw called "phossy jaw". Ingestion of white phosphorus may cause a medical condition
known as "Smoking Stool Syndrome".[42]

When the white form is exposed to sunlight or heated in its own vapour to 250 °C, it is converted to the red form, which does not chemiluminesce in air. The red allotrope does not spontaneously ignite in air and is not as dangerous as the white form. Nevertheless, it should be handled with care because it reverts to white phosphorus in some temperature ranges and it also emits highly toxic fumes consisting of phosphorus oxides when heated.[12]

Phosphorus explosion

In the past, it was suggested that, upon exposure to elemental phosphorus, the affected area be washed with a 2% copper sulfate solution to form harmless compounds that could then be washed away. According to the recent
US Navy's Treatment of Chemical Agent Casualties and Conventional Military Chemical Injuries: FM8-
285: Part 2 Conventional Military Chemical Injuries, "Cupric (copper(II)) sulfate has been used by U.S.
personnel in the past and is still being used by some nations. However, copper sulfate is toxic and its use
will be discontinued. Copper sulfate may produce kidney and cerebral toxicity as well as intravascular
hemolysis."[43]

The manual suggests instead "a bicarbonate solution to neutralize phosphoric acid, which will then allow
removal of visible white phosphorus. Particles often can be located by their emission of smoke when air
strikes them, or by their phosphorescence in the dark. In dark surroundings, fragments are seen as
luminescent spots. Promptly debride the burn if the patient's condition will permit removal of bits of WP
(white phosphorus) that might be absorbed later and possibly produce systemic poisoning. DO NOT
apply oily-based ointments until it is certain that all WP has been removed. Following complete removal
of the particles, treat the lesions as thermal burns."[note 1] As white phosphorus readily mixes with oils, oily substances or ointments are not recommended until the area is thoroughly cleaned and all white phosphorus removed.

US DEA List I status

Phosphorus can reduce elemental iodine to hydroiodic acid, which is a reagent effective for reducing
ephedrine or pseudoephedrine to methamphetamine.[44] For this reason, two allotropes of elemental
phosphorus—red phosphorus and white phosphorus—were designated by the United States Drug
Enforcement Administration as List I precursor chemicals under 21 CFR 1310.02 effective on November
17, 2001.[45] As a result, in the United States, handlers of red phosphorus or white phosphorus are subject
to stringent regulatory controls pursuant to the Controlled Substances Act in order to reduce diversion of
these substances for use in clandestine production of controlled substances.

Carbon

From Wikipedia, the free encyclopedia


For other uses, see Carbon (disambiguation).

boron ← carbon → nitrogen

6C — position in the periodic table: no element above carbon in its group; silicon below.

Appearance

clear (diamond) & black (graphite)

Spectral lines of Carbon

General properties

Name, symbol, number carbon, C, 6

Pronunciation /ˈkɑrbən/

Element category nonmetal

Group, period, block 14, 2, p

Standard atomic weight 12.0107(8) g·mol−1

Electron configuration 1s2 2s2 2p2 or [He] 2s2 2p2

Electrons per shell 2,4 (Image)

Physical properties

Phase Solid

Density (near r.t.) amorphous:[1] 1.8–2.1 g·cm−3

Density (near r.t.) graphite: 2.267 g·cm−3

Density (near r.t.) diamond: 3.515 g·cm−3

Sublimation point 3915 K (3642 °C, 6588 °F)

Triple point 4600 K (4327 °C), 10,800 kPa[2][3]

Heat of fusion 117 (graphite) kJ·mol−1

Specific heat capacity (25 °C) 8.517 (graphite), 6.155 (diamond) J·mol−1·K−1

Atomic properties

Oxidation states 4, 3 [4], 2, 1 [5], 0, -1, -2, -3, -4[6]

Electronegativity 2.55 (Pauling scale)

Ionization energies (more) 1st: 1086.5 kJ·mol−1; 2nd: 2352.6 kJ·mol−1; 3rd: 4620.5 kJ·mol−1

Covalent radius 77(sp³), 73(sp²), 69(sp) pm

Van der Waals radius 170 pm

Miscellanea

Magnetic ordering diamagnetic[7]

Thermal conductivity (300 K) 119–165 (graphite), 900–2300 (diamond) W·m−1·K−1

Thermal expansion (25 °C) 0.8 (diamond) [8] µm·m−1·K−1

Speed of sound (thin rod) (20 °C) 18350 (diamond) m/s

Young's modulus 1050 (diamond) [8] GPa

Shear modulus 478 (diamond) [8] GPa

Bulk modulus 442 (diamond) [8] GPa

Poisson ratio 0.1 (diamond) [8]

Mohs hardness 1–2 (graphite), 10 (diamond)

CAS registry number 7440-44-0

Most stable isotopes

Main article: Isotopes of carbon

iso NA half-life DM DE (MeV) DP

12C 98.9% 12C is stable with 6 neutrons

13C 1.1% 13C is stable with 7 neutrons

14C trace 5730 y β− 0.156 14N

Carbon ( /ˈkɑrbən/) is the chemical element with symbol C and atomic number 6. As a member of
group 14 on the periodic table, it is nonmetallic and tetravalent—making four electrons available to form
covalent chemical bonds. There are three naturally occurring isotopes, with 12C and 13C being stable,
while 14C is radioactive, decaying with a half-life of about 5730 years.[9] Carbon is one of the few
elements known since antiquity.[10][11] The name "carbon" comes from Latin carbo, coal.

There are several allotropes of carbon of which the best known are graphite, diamond, and amorphous
carbon.[12] The physical properties of carbon vary widely with the allotropic form. For example, diamond
is highly transparent, while graphite is opaque and black. Diamond is among the hardest materials known,
while graphite is soft enough to form a streak on paper (hence its name, from the Greek word "to write").
Diamond has a very low electrical conductivity, while graphite is a very good conductor. Under normal
conditions, diamond has the highest thermal conductivity of all known materials. All the allotropic forms
are solids under normal conditions but graphite is the most thermodynamically stable.

All forms of carbon are highly stable, requiring high temperature to react even with oxygen. The most
common oxidation state of carbon in inorganic compounds is +4, while +2 is found in carbon monoxide and in transition metal carbonyl complexes. The largest sources of inorganic carbon are limestones,
dolomites and carbon dioxide, but significant quantities occur in organic deposits of coal, peat, oil and
methane clathrates. Carbon forms more compounds than any other element, with almost ten million pure
organic compounds described to date, which in turn are a tiny fraction of such compounds that are
theoretically possible under standard conditions.[13]

Carbon is the 15th most abundant element in the Earth's crust, and the fourth most abundant element in
the universe by mass after hydrogen, helium, and oxygen. It is present in all known lifeforms, and in the
human body carbon is the second most abundant element by mass (about 18.5%) after oxygen. [14] This
abundance, together with the unique diversity of organic compounds and their unusual polymer-forming
ability at the temperatures commonly encountered on Earth, make this element the chemical basis of all
known life.

Contents

1 Characteristics
  1.1 Allotropes
  1.2 Occurrence
  1.3 Isotopes
  1.4 Formation in stars
  1.5 Carbon cycle
2 Compounds
  2.1 Organic compounds
  2.2 Inorganic compounds
  2.3 Organometallic compounds
3 History and etymology
4 Production
  4.1 Graphite
  4.2 Diamond
5 Applications
  5.1 Diamonds
6 Precautions
7 See also
8 Bonding to Carbon
9 References
10 External links

Characteristics

Theoretically predicted phase diagram of carbon

The different forms or allotropes of carbon (see below) include the hardest naturally occurring substance,
diamond, and also one of the softest known substances, graphite. Moreover, it has an affinity for bonding
with other small atoms, including other carbon atoms, and is capable of forming multiple stable covalent
bonds with such atoms. As a result, carbon is known to form almost ten million different compounds; the
large majority of all chemical compounds.[13] Carbon also has the highest melting and sublimation point
of all elements. At atmospheric pressure it has no melting point as its triple point is at 10.8 ± 0.2 MPa and
4600 ± 300 K,[2][3] so it sublimates at about 3900 K.[15][16]

Carbon sublimes in a carbon arc which has a temperature of about 5800 K. Thus, irrespective of its
allotropic form, carbon remains solid at higher temperatures than the highest melting point metals such as
tungsten or rhenium. Although thermodynamically prone to oxidation, carbon resists oxidation more
effectively than elements such as iron and copper that are weaker reducing agents at room temperature.

Carbon compounds form the basis of all known life on Earth, and the carbon-nitrogen cycle provides
some of the energy produced by the Sun and other stars. Although it forms an extraordinary variety of
compounds, most forms of carbon are comparatively unreactive under normal conditions. At standard
temperature and pressure, it resists all but the strongest oxidizers. It does not react with sulfuric acid,
hydrochloric acid, chlorine or any alkalis. At elevated temperatures carbon reacts with oxygen to form
carbon oxides, and will reduce such metal oxides as iron oxide to the metal. This exothermic reaction is used in the iron and steel industry to smelt iron and to control the carbon content of steel:

Fe3O4 + 4 C(s) → 3 Fe(s) + 4 CO(g)

with sulfur to form carbon disulfide and with steam in the coal-gas reaction:

C(s) + H2O(g) → CO(g) + H2(g).

Carbon combines with some metals at high temperatures to form metallic carbides, such as the iron
carbide cementite in steel, and tungsten carbide, widely used as an abrasive and for making hard tips for
cutting tools.
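To make the mass balance of the iron-oxide reduction above concrete, here is a minimal stoichiometry sketch in Python (the one-tonne batch size and all names are illustrative assumptions, not process data):

```python
# Mass balance for Fe3O4 + 4 C -> 3 Fe + 4 CO (a minimal sketch;
# the batch size and variable names are illustrative).
M_FE, M_O, M_C = 55.845, 15.999, 12.011   # approximate atomic masses, g/mol

m_magnetite = 1000.0                      # kg of Fe3O4 to be reduced
M_magnetite = 3 * M_FE + 4 * M_O          # molar mass of Fe3O4, g/mol

kmol_magnetite = m_magnetite / M_magnetite  # kg / (g/mol) gives kmol
m_carbon = kmol_magnetite * 4 * M_C         # kg of carbon consumed
m_iron = kmol_magnetite * 3 * M_FE          # kg of iron produced

print(f"{m_carbon:.0f} kg of carbon reduces {m_magnetite:.0f} kg of Fe3O4 "
      f"to {m_iron:.0f} kg of iron")        # ~208 kg C -> ~724 kg Fe
```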

As of 2009, graphene appears to be the strongest material ever tested.[17] However, the process of
separating it from graphite will require some technological development before it is economical enough to
be used in industrial processes.[18]

The system of carbon allotropes spans a range of extremes:

Synthetic nanocrystalline diamond is the hardest material known, while graphite is one of the softest.
Diamond is the ultimate abrasive, while graphite is a very good lubricant.
Diamond is an excellent electrical insulator, while graphite is a conductor of electricity.
Diamond is the best known naturally occurring thermal conductor, while some forms of graphite are used for thermal insulation (i.e. firebreaks and heat shields).
Diamond is highly transparent, while graphite is opaque.
Diamond crystallizes in the cubic system, while graphite crystallizes in the hexagonal system.
Amorphous carbon is completely isotropic, while carbon nanotubes are among the most anisotropic materials ever produced.

Allotropes

Main article: Allotropes of carbon

Atomic carbon is a very short-lived species and, therefore, carbon is stabilized in various multi-atomic
structures with different molecular configurations called allotropes. The three relatively well-known
allotropes of carbon are amorphous carbon, graphite, and diamond. Once considered exotic, fullerenes are
nowadays commonly synthesized and used in research; they include buckyballs,[19][20] carbon
nanotubes,[21] carbon nanobuds[22] and nanofibers.[23][24] Several other exotic allotropes have also been
discovered, such as lonsdaleite,[25] glassy carbon,[26] carbon nanofoam[27] and linear acetylenic carbon.[28]

• The amorphous form is an assortment of carbon atoms in a non-crystalline, irregular, glassy state, which is essentially graphite but not held in a crystalline macrostructure. It is present as a powder, and is the main constituent of substances such as charcoal, lampblack (soot) and activated carbon.
• At normal pressures carbon takes the form of graphite, in which each atom is bonded trigonally to three others in a plane composed of fused hexagonal rings, just like those in aromatic hydrocarbons. The resulting network is 2-dimensional, and the resulting flat sheets are stacked and loosely bonded through weak van der Waals forces. This gives graphite its softness and its cleaving properties (the sheets slip easily past one another). Because of the delocalization of one of the outer electrons of each atom to form a π-cloud, graphite conducts electricity, but only in the plane of each covalently bonded sheet. This results in a lower bulk electrical conductivity for carbon than for most metals. The delocalization also accounts for the energetic stability of graphite over diamond at room temperature.

Some allotropes of carbon: a) diamond; b) graphite; c) lonsdaleite; d–f) fullerenes (C60, C540, C70);
g) amorphous carbon; h) carbon nanotube.

• At very high pressures carbon forms the more compact allotrope diamond, having nearly twice the density of graphite. Here, each atom is bonded tetrahedrally to four others, thus making a 3-dimensional network of puckered six-membered rings of atoms. Diamond has the same cubic structure as silicon and germanium, and because of the strength of the carbon-carbon bonds, it is the hardest naturally occurring substance in terms of resistance to scratching. Contrary to the popular belief that "diamonds are forever", they are in fact thermodynamically unstable under normal conditions and transform into graphite.[12] But due to a high activation energy barrier, the transition into graphite is so extremely slow at room temperature as to be unnoticeable.
• Under some conditions, carbon crystallizes as lonsdaleite. This form has a hexagonal crystal lattice where all atoms are covalently bonded. Therefore, all properties of lonsdaleite are close to those of diamond.[25]
• Fullerenes have a graphite-like structure, but instead of purely hexagonal packing, they also contain pentagons (or even heptagons) of carbon atoms, which bend the sheet into spheres, ellipses or cylinders. The properties of fullerenes (split into buckyballs, buckytubes and nanobuds) have not yet been fully analyzed and represent an intense area of research in nanomaterials. The names "fullerene" and "buckyball" are given after Richard Buckminster Fuller, popularizer of geodesic domes, which resemble the structure of fullerenes. The buckyballs are fairly large molecules formed completely of carbon bonded trigonally, forming spheroids (the best-known and simplest is the soccerball-shaped C60 buckminsterfullerene).[19] Carbon nanotubes are structurally similar to buckyballs, except that each atom is bonded trigonally in a curved sheet that forms a hollow cylinder.[20][21] Nanobuds were first published in 2007 and are hybrid buckytube/buckyball materials (buckyballs are covalently bonded to the outer wall of a nanotube) that combine the properties of both in a single structure.[22]
• Of the other discovered allotropes, carbon nanofoam is a ferromagnetic allotrope discovered in 1997. It consists of a low-density cluster-assembly of carbon atoms strung together in a loose three-dimensional web, in which the atoms are bonded trigonally in six- and seven-membered rings. It is among the lightest known solids, with a density of about 2 kg/m3.[29] Similarly, glassy carbon contains a high proportion of closed porosity.[26] But unlike normal graphite, the graphitic layers are not stacked like pages in a book, but have a more random arrangement. Linear acetylenic carbon has the chemical structure[28] -(C≡C)n-. Carbon in this modification is linear with sp orbital hybridization, and is a polymer with alternating single and triple bonds. This type of carbyne is of considerable interest to nanotechnology as its Young's modulus is forty times that of the hardest known material, diamond.[30]

Allotropes of carbon



Eight allotropes of carbon: a) Diamond, b) Graphite, c) Lonsdaleite, d) C60 (Buckminsterfullerene or buckyball), e) C540, f) C70, g) Amorphous carbon, and h) single-walled carbon nanotube or buckytube.

This is a list of the allotropes of carbon.

Contents

1 Diamond
2 Graphite
  2.1 Graphene
3 Amorphous carbon
4 Buckminsterfullerenes
  4.1 Carbon nanotubes
  4.2 Carbon nanobuds
5 Glassy carbon
6 Carbon nanofoam
7 Lonsdaleite (hexagonal diamond)
8 Linear acetylenic carbon (LAC)
9 Other possible forms
10 Variability of carbon
11 References
12 External links

[edit] Diamond

Main article: Diamond

Diamond is one of the best known allotropes of carbon. The hardness and high dispersion of light of
diamond make it useful for both industrial applications and jewellery. Diamond is the hardest known
natural mineral. This makes it an excellent abrasive and makes it hold polish and luster extremely well.
No known naturally occurring substance can cut (or even scratch) a diamond.

The market for industrial-grade diamonds operates much differently from its gem-grade counterpart.
Industrial diamonds are valued mostly for their hardness and heat conductivity, making many of the
gemological characteristics of diamond, including clarity and color, mostly irrelevant. This helps explain
why 80% of mined diamonds (equal to about 100 million carats or 20 tonnes annually), unsuitable for use
as gemstones and known as bort, are destined for industrial use. In addition to mined diamonds, synthetic
diamonds found industrial applications almost immediately after their invention in the 1950s; another 400
million carats (80 tonnes) of synthetic diamonds are produced annually for industrial use—nearly four
times the mass of natural diamonds mined over the same period.
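Since one carat is defined as exactly 0.2 g, the tonnage figures quoted above can be checked with a one-line conversion (a trivial sketch; the variable names are illustrative):

```python
# Convert the quoted annual synthetic-diamond output from carats to tonnes.
GRAMS_PER_CARAT = 0.2              # exact, by definition
synthetic_carats = 400e6           # carats per year, figure from the text
tonnes = synthetic_carats * GRAMS_PER_CARAT / 1e6  # grams -> tonnes
print(f"{tonnes:.0f} tonnes")      # 80 tonnes, matching the figure above
```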

The dominant industrial use of diamond is in cutting, drilling (drill bits), grinding (diamond edged
cutters), and polishing. Most uses of diamonds in these technologies do not require large diamonds; in
fact, most diamonds that are gem-quality except for their small size can find an industrial use. Diamonds are embedded in drill tips
or saw blades, or ground into a powder for use in grinding and polishing applications. Specialized
applications include use in laboratories as containment for high pressure experiments (see diamond anvil),
high-performance bearings, and limited use in specialized windows.

With the continuing advances being made in the production of synthetic diamond, future applications are
beginning to become feasible. Garnering much excitement is the possible use of diamond as a
semiconductor suitable for building microchips, or the use of diamond as a heat sink in electronics.
Significant research efforts in Japan, Europe, and the United States are under way to capitalize on the
potential offered by diamond's unique material properties, combined with increased quality and quantity
of supply starting to become available from synthetic diamond manufacturers.

Each carbon atom in a diamond is covalently bonded to four other carbons in a tetrahedron. These
tetrahedrons together form a 3-dimensional network of six-membered carbon rings (similar to
cyclohexane), in the chair conformation, allowing for zero bond angle strain. This stable network of
covalent bonds and hexagonal rings is the reason that diamond is so incredibly strong.

[edit] Graphite

Main article: Graphite

Graphite (named by Abraham Gottlob Werner in 1789, from the Greek γράφειν (graphein, "to draw/write"), for its use in pencils) is one of the most common allotropes of carbon. Unlike diamond, graphite is an electrical conductor; thus, it can be used in, for instance, electrical arc lamp electrodes. Under standard conditions, graphite is the most stable form of carbon, and it is therefore used in thermochemistry as the standard state for defining the heat of formation of carbon compounds.

Graphite conducts electricity, due to delocalization of the pi bond electrons above and below the planes of
the carbon atoms. These electrons are free to move, so are able to conduct electricity. However, the
electricity is only conducted along the plane of the layers. In diamond, all four outer electrons of each
carbon atom are 'localised' between the atoms in covalent bonding. The movement of electrons is
restricted and diamond does not conduct an electric current. In graphite, each carbon atom uses only 3 of
its 4 outer energy level electrons in covalently bonding to three other carbon atoms in a plane. Each
carbon atom contributes one electron to a delocalised system of electrons that is also a part of the
chemical bonding. The delocalised electrons are free to move throughout the plane. For this reason,
graphite conducts electricity along the planes of carbon atoms, but does not conduct in a direction at right
angles to the plane.

Graphite powder is used as a dry lubricant. Although it might be thought that this industrially important
property is due entirely to the loose interlamellar coupling between sheets in the structure, in fact in a
vacuum environment (such as in technologies for use in space), graphite was found to be a very poor
lubricant. This fact led to the discovery that graphite's lubricity is due to adsorbed air and water between
the layers, unlike other layered dry lubricants such as molybdenum disulfide. Recent studies suggest that an effect called superlubricity can also account for graphite's lubricity.

When a large number of crystallographic defects bind these planes together, graphite loses its lubrication
properties and becomes what is known as pyrolytic carbon, a useful material in blood-contacting implants
such as prosthetic heart valves.

Natural and crystalline graphites are not often used in pure form as structural materials due to their shear-
planes, brittleness and inconsistent mechanical properties.

In its pure glassy (isotropic) synthetic forms, pyrolytic graphite and carbon fiber graphite are extremely
strong, heat-resistant (to 3000 °C) materials, used in reentry shields for missile nosecones, solid rocket
engines, high temperature reactors, brake shoes and electric motor brushes.

Intumescent or expandable graphites are used in fire seals, fitted around the perimeter of a fire door.
During a fire the graphite intumesces (expands and chars) to resist fire penetration and prevent the spread
of fumes. A typical start expansion temperature (SET) is between 150 and 300 °C.

Density: its specific gravity is 2.3, which makes it lighter than diamond.

Effect of heat: it is the most stable allotrope of carbon. At high temperatures and pressures (roughly 2000
°C and 5 GPa), it can be transformed into diamond. At about 700 °C it burns in oxygen forming carbon
dioxide.

Chemical activity: it is slightly more reactive than diamond. This is because the reactants are able to
penetrate between the hexagonal layers of carbon atoms in graphite. It is unaffected by ordinary solvents,
dilute acids, or fused alkalis. However, chromic acid oxidises it to carbon dioxide.

[edit] Graphene
Main article: Graphene

A single layer of graphite is called graphene and has extraordinary electrical, thermal, and physical
properties. It can be produced by epitaxy on an insulating or conducting substrate or by mechanical exfoliation (repeated peeling) from graphite. Its applications may include replacing silicon in high-performance electronic devices.

[edit] Amorphous carbon

Main article: Amorphous carbon

Amorphous carbon is the name used for carbon that does not have any crystalline structure. As with all
glassy materials, some short-range order can be observed, but there is no long-range pattern of atomic
positions. While entirely amorphous carbon can be produced, most amorphous carbon actually contains
microscopic crystals of graphite-like,[1] or even diamond-like carbon.[2]

Coal and soot or carbon black are informally called amorphous carbon. However, they are products of
pyrolysis (the process of decomposing a substance by the action of heat), which does not produce true
amorphous carbon under normal conditions. The coal industry divides coal into various grades
depending on the amount of carbon present in the sample compared to the amount of impurities. The
highest grade, anthracite, is about 90% carbon and 10% other elements. Bituminous coal is about 75-90%
carbon, and lignite is the name for coal that is around 55% carbon.

[edit] Buckminsterfullerenes

Main article: Fullerenes

The buckminsterfullerenes, or usually just fullerenes or buckyballs for short, were discovered in 1985 by a
team of scientists from Rice University and the University of Sussex, three of whom were awarded the
1996 Nobel Prize in Chemistry. They are named for the resemblance of their allotropic structure to the geodesic structures devised by the scientist and architect Richard Buckminster "Bucky" Fuller. Fullerenes
are molecules of varying sizes composed entirely of carbon, which take the form of a hollow sphere,
ellipsoid, or tube.

As of the early twenty-first century, the chemical and physical properties of fullerenes are still under
heavy study, in both pure and applied research labs. In April 2003, fullerenes were under study for
potential medicinal use — binding specific antibiotics to the structure to target resistant bacteria and even
target certain cancer cells such as melanoma.

[edit] Carbon nanotubes


Main article: Carbon nanotube

Carbon nanotubes, also called buckytubes, are cylindrical carbon molecules with novel properties that
make them potentially useful in a wide variety of applications (e.g., nano-electronics, optics, materials
applications, etc.). They exhibit extraordinary strength, unique electrical properties, and are efficient
conductors of heat. Inorganic nanotubes have also been synthesized. A nanotube is a member of the
fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a
nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball
structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few
nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to
several centimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs)
and multi-walled nanotubes (MWNTs).

[edit] Carbon nanobuds

Computer models of stable nanobud structures

Main article: Carbon nanobud

Carbon nanobuds are a newly discovered allotrope of carbon in which fullerene-like "buds" are covalently attached to the outer sidewalls of the carbon nanotubes. This hybrid material has useful
properties of both fullerenes and carbon nanotubes. In particular, they have been found to be
exceptionally good field emitters.

[edit] Glassy carbon

Main article: Glassy carbon


Glassy carbon or vitreous carbon is a class of non-graphitizing carbon widely used as an electrode
material in electrochemistry, as well as for high temperature crucibles and as a component of some
prosthetic devices. It was first produced by workers at the laboratories of The General Electric Company,
UK, in the early 1960s, using cellulose as the starting material. A short time later, Japanese workers
produced a similar material from phenolic resin.

According to another account, it was first produced by Bernard Redfern in the mid-1950s at the laboratories of The Carborundum Company, Trafford Park, Manchester, UK. He set out to develop a polymer matrix to mirror a diamond structure and discovered a resole (phenolic) resin that would, with special preparation, set without a catalyst. Using this resin the first glassy carbon was produced. Patents were filed, some of which were withdrawn in the interests of national security. Original research samples of resin and product exist.

The preparation of glassy carbon involves subjecting the organic precursors to a series of heat treatments at temperatures up to 3000 °C. Unlike many non-graphitizing carbons, glassy carbons are impermeable to gases and chemically extremely inert, especially those prepared at very high temperatures. It has been
demonstrated that the rates of oxidation of certain glassy carbons in oxygen, carbon dioxide or water
vapour are lower than those of any other carbon. They are also highly resistant to attack by acids. Thus,
while normal graphite is reduced to a powder by a mixture of concentrated sulfuric and nitric acids at
room temperature, glassy carbon is unaffected by such treatment, even after several months.

[edit] Carbon nanofoam

Main article: Carbon nanofoam

Carbon nanofoam is the fifth known allotrope of carbon, discovered in 1997 by Andrei V. Rode and co-workers at the Australian National University in Canberra. It consists of a low-density cluster-assembly of
carbon atoms strung together in a loose three-dimensional web.

Each cluster is about 6 nanometers wide and consists of about 4000 carbon atoms linked in graphite-like
sheets that are given negative curvature by the inclusion of heptagons among the regular hexagonal
pattern. This is the opposite of what happens in the case of buckminsterfullerenes, in which carbon sheets
are given positive curvature by the inclusion of pentagons.

The large-scale structure of carbon nanofoam is similar to that of an aerogel, but with 1% of the density of
previously produced carbon aerogels - only a few times the density of air at sea level. Unlike carbon
aerogels, carbon nanofoam is a poor electrical conductor.

[edit] Lonsdaleite (hexagonal diamond)

Main article: Lonsdaleite

Lonsdaleite is a hexagonal variant of diamond, believed to form from graphite present in meteorites upon their impact with Earth. The great heat and stress of the impact transforms the graphite into diamond, but retains graphite's hexagonal crystal lattice. Hexagonal diamond has also been
synthesized in the laboratory, by compressing and heating graphite either in a static press or using
explosives. It can also be produced by the thermal decomposition of a polymer, poly(hydridocarbyne), at atmospheric pressure under an inert gas atmosphere (e.g. argon, nitrogen), starting at temperatures of 110 °C (230 °F).[3][4][5]

[edit] Linear acetylenic carbon (LAC)

Main article: Carbyne

A one-dimensional carbon polymer with the structure -(C≡C)n-.

[edit] Other possible forms

Crystal structure of C8 cubic carbon

• Chaoite is a mineral believed to have been formed in meteorite impacts. It has been described as slightly harder than graphite with a reflection colour of grey to white. However, the existence of carbyne phases is disputed – see the entry on chaoite for details.

• Metallic carbon: Theoretical studies have shown that there are regions in the phase diagram, at extremely high pressures, where carbon has metallic character.[6]

• At ultrahigh pressures of above 1000 GPa, diamond is predicted to transform into the so-called C8 structure, a body-centered cubic structure with 8 atoms in the unit cell. This cubic carbon phase might have importance in astrophysics. Its structure is known in one of the metastable phases of silicon and is similar to cubane.[7] Superdense and superhard material resembling this phase has been synthesized recently.[8][9]

• There is evidence that white dwarf stars have a core of crystallized carbon and oxygen nuclei. The largest of these found in the universe so far, BPM 37093, is located 50 light-years (4.7×1014 km) away in the constellation Centaurus. A news release from the Harvard-Smithsonian Center for Astrophysics described the 2,500-mile (4,000 km)-wide stellar core as a diamond,[10] and it was named Lucy, after the Beatles' song "Lucy in the Sky With Diamonds";[11] however, it is more likely an exotic form of carbon.

• Prismane C8 is a theoretically predicted metastable carbon allotrope comprising an atomic cluster of eight carbon atoms, with the shape of an elongated triangular bipyramid—a six-atom triangular prism with two more atoms above and below its bases.[12]

[edit] Variability of carbon

Diamond and graphite are two allotropes of carbon: pure forms of the same element that differ in
structure.

The system of carbon allotropes spans an astounding range of extremes, considering that they are all
merely structural formations of the same element.

Between diamond and graphite:

• Diamond crystallizes in the cubic system but graphite crystallizes in the hexagonal system.
• Diamond is clear and transparent, but graphite is black and opaque.
• Diamond is the hardest mineral known (10 on the Mohs scale), but graphite is one of the softest (1–2 on the Mohs scale).
• Diamond is the ultimate abrasive, but graphite is soft and is a very good lubricant.
• Diamond is an excellent electrical insulator, but graphite is a conductor of electricity.
• Diamond is an excellent thermal conductor, but some forms of graphite are used for thermal insulation (for example heat shields and firebreaks).

Despite the hardness of diamonds, the chemical bonds that hold the carbon atoms in diamonds together
are actually weaker than those that hold together graphite. The difference is that in diamond, the bonds
form an inflexible three-dimensional lattice. In graphite, the atoms are tightly bonded into sheets, but the
sheets can slide easily making graphite soft.

Occurrence

Graphite ore

Raw diamond crystal.

"Present day" (1990s) sea surface dissolved inorganic carbon concentration (from the GLODAP
climatology)

Carbon is the fourth most abundant chemical element in the universe by mass after hydrogen, helium, and
oxygen. Carbon is abundant in the Sun, stars, comets, and in the atmospheres of most planets. Some
meteorites contain microscopic diamonds that were formed when the solar system was still a
protoplanetary disk. Microscopic diamonds may also be formed by the intense pressure and high
temperature at the sites of meteorite impacts.[31]

In combination with oxygen in carbon dioxide, carbon is found in the Earth's atmosphere (approximately
810 gigatonnes of carbon) and dissolved in all water bodies (approximately 36,000 gigatonnes of carbon).
Around 1,900 gigatonnes of carbon are present in the biosphere. Hydrocarbons (such as coal, petroleum,
and natural gas) contain carbon as well—coal "reserves" (not "resources") amount to around
900 gigatonnes, and oil reserves around 150 gigatonnes. Proven sources of natural gas are about 175
trillion cubic metres (representing about 105 gigatonnes carbon), but it is estimated that there are also
about 900 trillion cubic metres of "unconventional" gas such as shale gas, representing about 540
gigatonnes carbon.[32]
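As a rough cross-check of the gas-to-carbon conversion implied above (a sketch under assumed reference conditions; the methane density used here is an assumption, and the article's own conversion factor evidently differs slightly):

```python
# Convert a natural-gas volume to its contained carbon mass, treating
# the gas as pure methane at an assumed density near standard conditions.
METHANE_DENSITY = 0.72                   # kg/m3, assumed
CARBON_MASS_FRACTION = 12.011 / 16.043   # mass fraction of C in CH4

proven_reserves_m3 = 175e12              # cubic metres, from the text
carbon_gt = proven_reserves_m3 * METHANE_DENSITY * CARBON_MASS_FRACTION / 1e12
print(f"~{carbon_gt:.0f} Gt carbon")     # ~94 Gt, same order as the ~105 Gt quoted
```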

Carbon is a major component in very large masses of carbonate rock (limestone, dolomite, marble etc.).

Coal is a significant commercial source of mineral carbon, with anthracite containing 92–98% carbon;[33] it is the largest source (4,000 Gt, or 80% of coal, gas and oil reserves) of carbon in a form suitable for use as fuel.[34]

Graphite is found in large quantities in the United States (mostly in New York and Texas), Russia,
Mexico, Greenland, and India.

Natural diamonds occur in the rock kimberlite, found in ancient volcanic "necks," or "pipes". Most
diamond deposits are in Africa, notably in South Africa, Namibia, Botswana, the Republic of the Congo,
and Sierra Leone. There are also deposits in Arkansas, Canada, the Russian Arctic, Brazil and in Northern
and Western Australia.

Diamonds are now also being recovered from the ocean floor off the Cape of Good Hope. However,
though diamonds are found naturally, about 30% of all industrial diamonds used in the U.S. are now
made synthetically.

Carbon-14 is formed in upper layers of the troposphere and the stratosphere, at altitudes of 9–15 km, by a reaction initiated by cosmic rays: thermal neutrons produced by the cosmic rays collide with the nuclei of nitrogen-14, forming carbon-14 and a proton.
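Written as a nuclear equation, this reaction is:

$$ {}^{14}_{7}\mathrm{N} + {}^{1}_{0}n \;\longrightarrow\; {}^{14}_{6}\mathrm{C} + {}^{1}_{1}p $$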

Isotopes

Main article: Isotopes of carbon

Isotopes of carbon are atomic nuclei that contain six protons plus a number of neutrons (varying from 2 to
16). Carbon has two stable, naturally occurring isotopes.[9] The isotope carbon-12 (12C) forms 98.93% of
the carbon on Earth, while carbon-13 (13C) forms the remaining 1.07%.[9] The concentration of 12C is
further increased in biological materials because biochemical reactions discriminate against 13C.[35] In
1961 the International Union of Pure and Applied Chemistry (IUPAC) adopted the isotope carbon-12 as
the basis for atomic weights.[36] Identification of carbon in NMR experiments is done with the isotope 13C.
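The isotopic discrimination mentioned above is conventionally quantified with the δ13C notation, the per-mille deviation of a sample's 13C/12C ratio from that of a reference standard:

$$ \delta^{13}\mathrm{C} = \left( \frac{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{sample}}}{({}^{13}\mathrm{C}/{}^{12}\mathrm{C})_{\mathrm{standard}}} - 1 \right) \times 1000\,\text{‰} $$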

Carbon-14 (14C) is a naturally occurring radioisotope which occurs in trace amounts on Earth of up to 1
part per trillion (0.0000000001%), mostly confined to the atmosphere and superficial deposits,
particularly of peat and other organic materials.[37] This isotope decays by 0.158 MeV β- emission.
Because of its relatively short half-life of 5730 years, 14C is virtually absent in ancient rocks, but is
created in the upper atmosphere (lower stratosphere and upper troposphere) by interaction of nitrogen
with cosmic rays.[38] The abundance of 14C in the atmosphere and in living organisms is almost constant,
but decreases predictably in their bodies after death. This principle is used in radiocarbon dating, invented
in 1949, which has been used extensively to determine the age of carbonaceous materials with ages up to
about 40,000 years.[39][40]
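The dating principle is simple exponential decay; the sketch below (an illustrative helper, assuming a constant initial atmospheric 14C level) recovers an age from the measured fraction of 14C remaining:

```python
import math

HALF_LIFE_C14 = 5730.0  # years, from the text

def age_from_c14_fraction(remaining_fraction: float) -> float:
    """Age of a sample given the fraction of its original 14C remaining,
    assuming simple exponential decay from a constant initial level."""
    return -HALF_LIFE_C14 / math.log(2) * math.log(remaining_fraction)

# A sample retaining 25% of its original 14C is about two half-lives old:
print(f"{age_from_c14_fraction(0.25):.0f} years")  # ~11460
```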

There are 15 known isotopes of carbon, and the shortest-lived of these is 8C, which decays through proton emission and alpha decay and has a half-life of 1.98739×10−21 s.[41] The exotic 19C exhibits a nuclear halo, which means its radius is appreciably larger than would be expected if the nucleus were a sphere of constant density.[42]

Formation in stars

Main articles: Triple-alpha process and CNO cycle

Formation of the carbon atomic nucleus requires a nearly simultaneous triple collision of alpha particles (helium nuclei) within the core of a giant or supergiant star. This happens under conditions of temperatures above 100 megakelvins and helium concentrations that the rapid expansion and cooling of the early universe prohibited, and therefore no significant carbon was created during the Big Bang. Instead, the interiors of stars in the horizontal branch transform three helium nuclei into carbon by means of this triple-alpha process. In order to be available for the formation of life as we know it, this carbon must later be scattered into space as dust in supernova explosions, as part of the material which later forms second- and third-generation star systems with planets accreted from such dust. The Solar System is one such third-generation star system.
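Schematically, the triple-alpha process proceeds in two steps:

$$ {}^{4}\mathrm{He} + {}^{4}\mathrm{He} \rightleftharpoons {}^{8}\mathrm{Be}, \qquad {}^{8}\mathrm{Be} + {}^{4}\mathrm{He} \longrightarrow {}^{12}\mathrm{C} + \gamma $$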

One of the fusion mechanisms powering stars is the carbon-nitrogen cycle.

Rotational transitions of various isotopic forms of carbon monoxide (e.g. 12CO, 13CO, and C18O) are
detectable in the submillimeter regime, and are used in the study of newly forming stars in molecular
clouds.

Carbon cycle

Main article: Carbon cycle

Diagram of the carbon cycle. The black numbers indicate how much carbon is stored in various
reservoirs, in billions of tons ("GtC" stands for gigatons of carbon; figures are circa 2004). The purple
numbers indicate how much carbon moves between reservoirs each year. The sediments, as defined in
this diagram, do not include the ~70 million GtC of carbonate rock and kerogen.

Under terrestrial conditions, conversion of one element to another is very rare. Therefore, the amount of
carbon on Earth is effectively constant. Thus, processes that use carbon must obtain it somewhere and
dispose of it somewhere else. The paths that carbon follows in the environment make up the carbon cycle.
For example, plants draw carbon dioxide out of their environment and use it to build biomass, as in the Calvin cycle, a process of carbon fixation. Some of this biomass is eaten by animals, whereas some carbon is exhaled by animals as carbon dioxide. The carbon cycle is considerably more complicated than this short loop; for example, some carbon dioxide is dissolved in the oceans; dead plant or animal matter may become petroleum or coal, which releases its carbon when burned, should bacteria not consume it.[43]
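The bookkeeping view of the cycle (a fixed total of carbon moving between reservoirs) can be made concrete with a toy two-reservoir model; every number below is a hypothetical round figure chosen for illustration, not a measured flux:

```python
# Toy two-reservoir carbon-cycle model. Total carbon is conserved;
# only its distribution between the reservoirs changes.
atmosphere, biosphere = 900.0, 2000.0    # GtC, hypothetical round numbers

def step(atm, bio, uptake=0.15, release=0.06):
    """One year: photosynthesis fixes a fraction of atmospheric carbon,
    respiration and decay return a fraction of biospheric carbon."""
    fixed = uptake * atm
    returned = release * bio
    return atm - fixed + returned, bio + fixed - returned

for year in range(1, 4):
    atmosphere, biosphere = step(atmosphere, biosphere)
    print(f"year {year}: atmosphere={atmosphere:.0f} GtC, "
          f"biosphere={biosphere:.0f} GtC, total={atmosphere + biosphere:.0f}")
```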

Compounds

Organic compounds

Main article: Organic compound

Structural formula of methane, the simplest possible organic compound.

Correlation between the carbon cycle and formation of organic compounds. In plants, carbon dioxide
formed by carbon fixation can join with water in photosynthesis (green) to form organic compounds,
which can be used and further converted by both plants and animals.

Carbon has the ability to form very long chains of interconnecting C-C bonds. This property is called
catenation. Carbon-carbon bonds are strong and stable. This property allows carbon to form an almost
infinite number of compounds; in fact, there are more known carbon-containing compounds than all the
compounds of the other chemical elements combined except those of hydrogen (because almost all
organic compounds contain hydrogen too).

The simplest form of an organic molecule is the hydrocarbon—a large family of organic molecules that
are composed of hydrogen atoms bonded to a chain of carbon atoms. Chain length, side chains and
functional groups all affect the properties of organic molecules. By IUPAC's definition, all the other organic compounds are functionalized compounds of hydrocarbons.

Carbon occurs in all known organic life and is the basis of organic chemistry. When united with
hydrogen, it forms various hydrocarbons which are important to industry as refrigerants, lubricants,
solvents, as chemical feedstock for the manufacture of plastics and petrochemicals and as fossil fuels.

When combined with oxygen and hydrogen, carbon can form many groups of important biological
compounds including sugars, lignans, chitins, alcohols, fats, and aromatic esters, carotenoids and
terpenes. With nitrogen it forms alkaloids, and with the addition of sulfur it also forms antibiotics, amino acids, and rubber products. With the addition of phosphorus to these other elements, it forms DNA and
RNA, the chemical-code carriers of life, and adenosine triphosphate (ATP), the most important energy-
transfer molecule in all living cells.

Inorganic compounds

Main article: Compounds of carbon

Carbon-containing compounds that are associated with minerals, or that do not contain hydrogen or fluorine, are commonly treated separately from classical organic compounds; however, the definition is not rigid (see reference articles above). Among these are the simple oxides of carbon. The most prominent oxide is carbon dioxide (CO2). This was once the principal constituent of the paleoatmosphere, but is a minor component of the Earth's atmosphere today.[44] Dissolved in water, it forms carbonic acid (H2CO3), but, like most compounds with multiple single-bonded oxygens on a single carbon atom, it is unstable.[45] Through this intermediate, though, resonance-stabilized carbonate ions are produced. Some important minerals are carbonates, notably calcite. Carbon disulfide (CS2) is similar.

The other common oxide is carbon monoxide (CO). It is formed by incomplete combustion, and is a
colorless, odorless gas. The molecules each contain a triple bond and are fairly polar, resulting in a
tendency to bind permanently to hemoglobin molecules, displacing oxygen, which has a lower binding
affinity.[46][47] Cyanide (CN–) has a similar structure, but behaves much like a halide ion (pseudohalogen). For example, it can form the nitride cyanogen molecule ((CN)2), similar to diatomic halides. Other
uncommon oxides are carbon suboxide (C3O2),[48] the unstable dicarbon monoxide (C2O),[49][50] carbon
trioxide (CO3),[51][52] cyclopentanepentone (C5O5),[53] cyclohexanehexone (C6O6) ,[53] and mellitic
anhydride (C12O9).

With reactive metals, such as tungsten, carbon forms either carbides (C4−) or acetylides (C22−) to form alloys with high melting points. These anions are also associated with methane and acetylene, both very weak acids. With an electronegativity of 2.5,[54] carbon prefers to form covalent bonds. A few carbides are covalent lattices, like carborundum (SiC), which resembles diamond.

Organometallic compounds

Main article: Organometallic chemistry

Organometallic compounds by definition contain at least one carbon-metal bond. A wide range of such
compounds exist; major classes include simple alkyl-metal compounds (e.g. tetraethyllead); η2-alkene compounds (e.g. Zeise's salt); η3-allyl compounds (e.g. allylpalladium chloride dimer); metallocenes containing cyclopentadienyl ligands (e.g. ferrocene); and transition metal carbene complexes. Many metal
carbonyls exist (e.g. tetracarbonylnickel); some workers consider the carbon monoxide ligand to be
purely inorganic, and not organometallic.

While carbon is normally understood to form exactly four bonds, an interesting compound containing an octahedral hexacoordinated carbon atom has been reported. The cation of the compound is
[(Ph3PAu)6C]2+. This phenomenon has been attributed to the aurophilicity of the gold ligands.[55]

History and etymology

Antoine Lavoisier in his youth

The English name carbon comes from the Latin carbo for coal and charcoal,[56] whence also comes the French charbon, meaning charcoal. In German, Dutch and Danish, the names for carbon are
Kohlenstoff, koolstof and kulstof respectively, all literally meaning coal-substance.

Carbon was discovered in prehistory and was known in the forms of soot and charcoal to the earliest
human civilizations. Diamonds were known probably as early as 2500 BCE in China, while carbon in the
form of charcoal was made around Roman times by the same chemistry as it is today, by heating wood in
a pyramid covered with clay to exclude air.[57][58]

Carl Wilhelm Scheele

In 1722, René Antoine Ferchault de Réaumur demonstrated that iron was transformed into steel through
the absorption of some substance, now known to be carbon.[59] In 1772, Antoine Lavoisier showed that
diamonds are a form of carbon, when he burned samples of carbon and diamond then showed that neither
produced any water and that both released the same amount of carbon dioxide per gram. In 1779,[60] Carl
Wilhelm Scheele showed that graphite, which had been thought of as a form of lead, was instead a type of
carbon.[61] In 1786, the French scientists Claude Louis Berthollet, Gaspard Monge and C. A.
Vandermonde then showed that this substance was carbon.[62] In their publication they proposed the name
carbone (Latin carbonum) for this element. Antoine Lavoisier listed carbon as an element in his 1789
textbook.[63]

A new allotrope of carbon, fullerene, discovered in 1985,[64] includes nanostructured forms such as buckyballs and nanotubes.[19] Their discoverers (Curl, Kroto, and Smalley) received the Nobel Prize in Chemistry in 1996.[65] The resulting renewed interest in new forms led to the discovery of further exotic allotropes, including glassy carbon, and to the realization that "amorphous carbon" is not strictly amorphous.[26]

Production

Graphite

Main article: Graphite

Commercially viable natural deposits of graphite occur in many parts of the world, but the most important
sources economically are in China, India, Brazil, and North Korea.[66] Graphite deposits are of
metamorphic origin, found in association with quartz, mica and feldspars in schists, gneisses and
metamorphosed sandstones and limestone as lenses or veins, sometimes of a meter or more in thickness.
Deposits of graphite in Borrowdale, Cumberland, England were at first of sufficient size and purity that,
until the 1800s, pencils were made simply by sawing blocks of natural graphite into strips before encasing
the strips in wood. Today, smaller deposits of graphite are obtained by crushing the parent rock and
floating the lighter graphite out on water.

According to the USGS, world production of natural graphite in 2006 was 1.03 million tons and in 2005
was 1.04 million tons (revised), of which the following major exporters produced: China produced
720,000 tons in both 2006 and 2005, Brazil 75,600 tons in 2006 and 75,515 tons in 2005 (revised),
Canada 28,000 tons in both years, and Mexico (amorphous) 12,500 tons in 2006 and 12,357 tons in 2005
(revised). In addition, there are two specialist producers: Sri Lanka produced 3,200 tons in 2006 and
3,000 tons in 2005 of lump or vein graphite, and Madagascar produced 15,000 tons in both years, a large
portion of it "crucible grade" or very large flake graphite. Some other producers produce very small
amounts of "crucible grade".

According to the USGS, U.S. (synthetic) graphite electrode production in 2006 was 132,000 tons valued
at $495 million and in 2005 was 146,000 tons valued at $391 million, and high-modulus graphite (carbon)
fiber production in 2006 was 8,160 tons valued at $172 million and in 2005 was 7,020 tons valued at
$134 million.

Diamond

Main article: Diamond

Diamond output in 2005

The diamond supply chain is controlled by a limited number of powerful businesses, and is also highly
concentrated in a small number of locations around the world (see figure).

Only a very small fraction of the diamond ore consists of actual diamonds. The ore is crushed, during
which care has to be taken in order to prevent larger diamonds from being destroyed in this process and
subsequently the particles are sorted by density. Today, diamonds are located in the diamond-rich density
fraction with the help of X-ray fluorescence, after which the final sorting steps are done by hand. Before
the use of X-rays became commonplace, the separation was done with grease belts; diamonds have a
stronger tendency to stick to grease than the other minerals in the ore.[67]

Historically diamonds were known to be found only in alluvial deposits in southern India.[68] India led the
world in diamond production from the time of their discovery in approximately the 9th century BCE [69] to
the mid-18th century AD, but the commercial potential of these sources had been exhausted by the late
18th century and at that time India was eclipsed by Brazil where the first non-Indian diamonds were
found in 1725.[70]

Diamond production of primary deposits (kimberlites and lamproites) only started in the 1870s after the
discovery of the Diamond fields in South Africa. Production has increased over time and now an
accumulated total of 4.5 billion carats have been mined since that date.[71] Some 20% of that amount has been mined in the last five years alone, and during the last ten years nine new mines have started production while four more are waiting to be opened soon. Most of these mines are located in Canada, Zimbabwe, and Angola, with one in Russia.[71]

In the United States, diamonds have been found in Arkansas, Colorado, and Montana.[72][73] In 2004, a
startling discovery of a microscopic diamond in the United States[74] led to the January 2008 bulk-
sampling of kimberlite pipes in a remote part of Montana.[75]

Today, most commercially viable diamond deposits are in Russia, Botswana, Australia and the
Democratic Republic of Congo.[76] In 2005, Russia produced almost one-fifth of the global diamond
output, reports the British Geological Survey. Australia boasts the richest diamantiferous pipe with
production reaching peak levels of 42 metric tons (41 LT; 46 ST) per year in the 1990s.[72]

There are also commercial deposits being actively mined in the Northwest Territories of Canada, Siberia
(mostly in Yakutia territory, for example Mir pipe and Udachnaya pipe), Brazil, and in Northern and
Western Australia. Diamond prospectors continue to search the globe for diamond-bearing kimberlite and
lamproite pipes.

Applications

Pencil leads for mechanical pencils are made of graphite (often mixed with a clay or synthetic binder).

Sticks of vine and compressed charcoal.

A cloth of woven carbon filaments

Silicon carbide single crystal

The C60 fullerene in crystalline form

Tungsten carbide milling bits

Carbon is essential to all known living systems, and without it life as we know it could not exist (see
alternative biochemistry). The major economic use of carbon other than food and wood is in the form of
hydrocarbons, most notably the fossil fuel methane gas and crude oil (petroleum). Crude oil is used by the
petrochemical industry to produce, amongst others, gasoline and kerosene, through a distillation process,
in refineries. Cellulose is a natural, carbon-containing polymer produced by plants in the form of cotton,
linen, and hemp. Cellulose is mainly used for maintaining structure in plants. Commercially valuable
carbon polymers of animal origin include wool, cashmere and silk. Plastics are made from synthetic
carbon polymers, often with oxygen and nitrogen atoms included at regular intervals in the main polymer
chain. The raw materials for many of these synthetic substances come from crude oil.

The uses of carbon and its compounds are extremely varied. It can form alloys with iron, of which the
most common is carbon steel. Graphite is combined with clays to form the 'lead' used in pencils for writing and drawing. It is also used as a lubricant and a pigment, as a molding material in glass
manufacture, in electrodes for dry batteries and in electroplating and electroforming, in brushes for
electric motors and as a neutron moderator in nuclear reactors.

Charcoal is used as a drawing material in artwork, for grilling, and in many other uses including iron
smelting. Wood, coal and oil are used as fuel for production of energy and space heating. Gem quality
diamond is used in jewelry, and Industrial diamonds are used in drilling, cutting and polishing tools for
machining metals and stone. Plastics are made from fossil hydrocarbons, and carbon fiber, made by pyrolysis of synthetic polyester fibers, is used to reinforce plastics to form advanced, lightweight composite materials. Carbon fiber is made by pyrolysis of extruded and stretched filaments of polyacrylonitrile (PAN) and other organic substances. The crystallographic structure and mechanical
properties of the fiber depend on the type of starting material, and on the subsequent processing. Carbon
fibers made from PAN have structure resembling narrow filaments of graphite, but thermal processing
may re-order the structure into a continuous rolled sheet. The result is fibers with higher specific tensile
strength than steel.[77]

Carbon black is used as the black pigment in printing ink, artist's oil paint and water colours, carbon
paper, automotive finishes, India ink and laser printer toner. Carbon black is also used as a filler in rubber
products such as tyres and in plastic compounds. Activated charcoal is used as an absorbent and adsorbent
in filter material in applications as diverse as gas masks, water purification and kitchen extractor hoods
and in medicine to absorb toxins, poisons, or gases from the digestive system. Carbon is used in chemical
reduction at high temperatures. Coke is used to reduce iron ore into iron. Case hardening of steel is
achieved by heating finished steel components in carbon powder. Carbides of silicon, tungsten, boron and
titanium, are among the hardest known materials, and are used as abrasives in cutting and grinding tools.
Carbon compounds make up most of the materials used in clothing, such as natural and synthetic textiles
and leather, and almost all of the interior surfaces in the built environment other than glass, stone and
metal.

Diamonds

The diamond industry can be broadly separated into two basically distinct categories: one dealing with
gem-grade diamonds and another for industrial-grade diamonds. While a large trade in both types of
diamonds exists, the two markets act in dramatically different ways.

A large trade in gem-grade diamonds exists. Unlike precious metals such as gold or platinum, gem
diamonds do not trade as a commodity: there is a substantial mark-up in the sale of diamonds, and there is
not a very active market for resale of diamonds.

The market for industrial-grade diamonds operates much differently from its gem-grade counterpart.
Industrial diamonds are valued mostly for their hardness and heat conductivity, making many of the
gemological characteristics of diamond, including clarity and color, mostly irrelevant. This helps explain
why 80% of mined diamonds (equal to about 100 million carats or 20,000 kg annually), unsuitable for use
as gemstones and known as bort, are destined for industrial use.[78] In addition to mined diamonds,
synthetic diamonds found industrial applications almost immediately after their invention in the 1950s;
another 3 billion carats (600 metric tons) of synthetic diamond is produced annually for industrial use.[79]
The dominant industrial use of diamond is in cutting, drilling, grinding, and polishing. Most uses of
diamonds in these technologies do not require large diamonds; in fact, most diamonds that are gem-quality except for their small size can find an industrial use. Diamonds are embedded in drill tips or saw
blades, or ground into a powder for use in grinding and polishing applications.[80] Specialized applications
include use in laboratories as containment for high pressure experiments (see diamond anvil cell), high-
performance bearings, and limited use in specialized windows.[81][82] With the continuing advances being
made in the production of synthetic diamonds, future applications are beginning to become feasible. Garnering much excitement is the possible use of diamond as a semiconductor suitable for building microchips, or the use of diamond as a heat sink in electronics.[83]

Precautions

Worker at carbon black plant in Sunray, Texas (photo by John Vachon, 1942)

Pure carbon has extremely low toxicity to humans and can be handled and even ingested safely in the
form of graphite or charcoal. It is resistant to dissolution or chemical attack, even in the acidic contents of
the digestive tract, for example. Consequently, once it enters the body's tissues, it is likely to remain there indefinitely. Carbon black was probably one of the first pigments to be used for tattooing, and Ötzi
the Iceman was found to have carbon tattoos that survived during his life and for 5200 years after his
death.[84] However, inhalation of coal dust or soot (carbon black) in large quantities can be dangerous,
irritating lung tissues and causing the congestive lung disease coalworker's pneumoconiosis. Similarly,
diamond dust used as an abrasive can do harm if ingested or inhaled. Microparticles of carbon are
produced in diesel engine exhaust fumes, and may accumulate in the lungs.[85] In these examples, the
harmful effects may result from contamination of the carbon particles, with organic chemicals or heavy
metals for example, rather than from the carbon itself.

Carbon generally has low toxicity to almost all life on Earth; to some organisms, however, it can still be toxic. For instance, carbon nanoparticles are deadly toxins to Drosophila.[86]

Carbon may also burn vigorously and brightly in the presence of air at high temperatures, as in the
Windscale fire, which was caused by sudden release of stored Wigner energy in the graphite core. Large
accumulations of coal, which have remained inert for hundreds of millions of years in the absence of
oxygen, may spontaneously combust when exposed to air, for example in coal mine waste tips.

The great variety of carbon compounds include such lethal poisons as tetrodotoxin, the lectin ricin from
seeds of the castor oil plant Ricinus communis, cyanide (CN-) and carbon monoxide; and such essentials
to life as glucose and protein.

Oxygen



nitrogen ← oxygen → fluorine


8O

Periodic table

Appearance

Colorless gas; pale blue liquid. Oxygen bubbles rise in this rotated photo
of liquid oxygen.

Spectral lines of oxygen

General properties

Name, symbol, number oxygen, O, 8

Pronunciation /ˈɒksɪdʒɪn/ OK-si-jin

Element category nonmetal, chalcogens

456
Group, period, block 16, 2, p

Standard atomic weight 15.9994g·mol−1

Electron configuration 1s2 2s2 2p4

Electrons per shell 2, 6 (Image)

Physical properties

Phase gas

(0 °C, 101.325 kPa)


Density
1.429 g/L

Liquid density at b.p. 1.141 g·cm−3

Melting point 54.36 K-218.79 ° ,C-361.82 ° ,F

Boiling point 90.20 K-182.95 ° ,C-297.31 ° ,F

Critical point 154.59 K, 5.043 MPa

Heat of fusion (O2) 0.444 kJ·mol−1

Heat of vaporization (O2) 6.82 kJ·mol−1

(25 °C) (O2)


Specific heat capacity
29.378 J·mol−1·K−1

Vapor pressure

P (Pa) 1 10 100 1k 10 k 100 k

at T (K) 61 73 90

Atomic properties

2, 1, −1, −2
Oxidation states
(neutral oxide)

457
Electronegativity 3.44 (Pauling scale)

Ionization energies 1st: 1313.9 kJ·mol−1


(more)
2nd: 3388.3 kJ·mol−1

3rd: 5300.5 kJ·mol−1

Covalent radius 66±2 pm

Van der Waals radius 152 pm

Miscellanea

Crystal structure cubic

Magnetic ordering paramagnetic

Thermal conductivity (300 K) 26.58x10-3 W·m−1·K−1

Speed of sound (gas, 27 °C) 330 m/s

CAS registry number 7782-44-7

Most stable isotopes

Main article: Isotopes of oxygen

iso NA half-life DM DE (MeV) DP


16 16
O 99.76% O is stable with 8 neutrons

17 17
O 0.039% O is stable with 9 neutrons

18 18
O 0.201% O is stable with 10 neutrons

v·d·e

Oxygen ( /ˈɒksɪdʒɪn/ OK-si-jin) is the element with atomic number 8 and represented by the symbol O.
Its name derives from the Greek roots ὀξύς (oxys) (acid, literally "sharp", referring to the sour taste of
acids) and -γενής (-genēs) (producer, literally begetter), because at the time of naming, it was mistakenly
thought that all acids required oxygen in their composition. At standard temperature and pressure, two
atoms of the element bind to form dioxygen, a colorless, odorless, tasteless diatomic gas with the formula
O2.

Oxygen is a member of the chalcogen group on the periodic table, and is a highly reactive nonmetallic
element that readily forms compounds (notably oxides) with almost all other elements. By mass, oxygen
is the third most abundant element in the universe after hydrogen and helium[1] and the most abundant
element by mass in the Earth's crust, making up almost half of the crust's mass.[2] Free oxygen is too
chemically reactive to appear on Earth without the photosynthetic action of living organisms, which use
the energy of sunlight to produce elemental oxygen from water. Elemental O2 only began to accumulate
in the atmosphere after the evolutionary appearance of these organisms, roughly 2.5 billion years ago.[3]
Diatomic oxygen gas constitutes 20.8% of the volume of air.[4]

Because it comprises most of the mass in water, oxygen comprises most of the mass of living organisms
(for example, about two-thirds of the human body's mass). All major classes of structural molecules in
living organisms, such as proteins, carbohydrates, and fats, contain oxygen, as do the major inorganic
compounds that comprise animal shells, teeth, and bone. Elemental oxygen is produced by cyanobacteria,
algae and plants, and is used in cellular respiration for all complex life. Oxygen is toxic to obligately
anaerobic organisms, which were the dominant form of early life on Earth until O2 began to accumulate in
the atmosphere. Another form (allotrope) of oxygen, ozone (O3), helps protect the biosphere from
ultraviolet radiation with the high-altitude ozone layer, but is a pollutant near the surface where it is a by-
product of smog. At the even higher altitudes of low Earth orbit, atomic oxygen is a significant presence
and a cause of erosion for spacecraft.[5]

Oxygen was independently discovered by Carl Wilhelm Scheele, in Uppsala, in 1773 or earlier, and
Joseph Priestley in Wiltshire, in 1774, but Priestley is often given priority because his work was
published first. The name oxygen was coined in 1777 by Antoine Lavoisier,[6] whose experiments with
oxygen helped to discredit the then-popular phlogiston theory of combustion and corrosion. Oxygen is
produced industrially by fractional distillation of liquefied air, use of zeolites with pressure-cycling to
concentrate oxygen from air, electrolysis of water and other means. Uses of oxygen include the
production of steel, plastics and textiles; rocket propellant; oxygen therapy; and life support in aircraft,
submarines, spaceflight and diving.

Contents


 1 Characteristics
o 1.1 Structure
o 1.2 Allotropes
o 1.3 Physical properties
o 1.4 Isotopes and stellar origin
o 1.5 Occurrence
 2 Biological role
o 2.1 Photosynthesis and respiration
o 2.2 Build-up in the atmosphere
 3 History
o 3.1 Early experiments
o 3.2 Phlogiston theory
o 3.3 Discovery
o 3.4 Lavoisier's contribution
o 3.5 Later history
 4 Industrial production
 5 Applications
o 5.1 Medical
o 5.2 Life support and recreational use
o 5.3 Industrial
o 5.4 Scientific
 6 Compounds
o 6.1 Oxides and other inorganic compounds
o 6.2 Organic compounds and biomolecules
 7 Safety and precautions
o 7.1 Toxicity
o 7.2 Combustion and other hazards
 8 See also
 9 Notes and citations
 10 References
 11 Further reading
 12 External links

Characteristics

Structure

Oxygen discharge (spectrum) tube

At standard temperature and pressure, oxygen is a colorless, odorless gas with the molecular formula O2,
in which the two oxygen atoms are chemically bonded to each other with a spin triplet electron
configuration. This bond has a bond order of two, and is often simplified in description as a double
bond[7] or as a combination of one two-electron bond and two three-electron bonds.[8]

Triplet oxygen (not to be confused with ozone, O3) is the ground state of the O2 molecule.[9] The electron
configuration of the molecule has two unpaired electrons occupying two degenerate molecular orbitals.[10]
These orbitals are classified as antibonding (weakening the bond order from three to two), so the diatomic
oxygen bond is weaker than the diatomic nitrogen triple bond in which all bonding molecular orbitals are
filled, but some antibonding orbitals are not.[9]

In normal triplet form, O2 molecules are paramagnetic; that is, they are attracted to a magnetic field,
because of the spin magnetic moments of the unpaired electrons in the molecule and the negative
exchange energy between neighboring O2 molecules.[11] Liquid oxygen is attracted to a magnet
to a sufficient extent that, in laboratory demonstrations, a bridge of liquid oxygen may be supported
against its own weight between the poles of a powerful magnet.[12][13]

Singlet oxygen is a name given to several higher-energy species of molecular O2 in which all the electron
spins are paired. It is much more reactive towards common organic molecules than is molecular oxygen
per se. In nature, singlet oxygen is commonly formed from water during photosynthesis, using the energy
of sunlight.[14] It is also produced in the troposphere by the photolysis of ozone by light of short
wavelength,[15] and by the immune system as a source of active oxygen.[16] Carotenoids in photosynthetic
organisms (and possibly also in animals) play a major role in absorbing energy from singlet oxygen and
converting it to the unexcited ground state before it can cause harm to tissues.[17]

Allotropes

Main article: Allotropes of oxygen

Ozone is a rare gas on Earth found mostly in the stratosphere.

The common allotrope of elemental oxygen on Earth is called dioxygen, O2. It has a bond length of
121 pm and a bond energy of 498 kJ·mol−1.[18] This is the form that is used by complex forms of life, such
as animals, in cellular respiration (see Biological role) and is the form that is a major part of the Earth's
atmosphere (see Occurrence). Other aspects of O2 are covered in the remainder of this article.

Trioxygen (O3) is usually known as ozone and is a very reactive allotrope of oxygen that is damaging to
lung tissue.[19] Ozone is produced in the upper atmosphere when O2 combines with atomic oxygen made
by the splitting of O2 by ultraviolet (UV) radiation.[6] Since ozone absorbs strongly in the UV region of
the spectrum, the ozone layer of the upper atmosphere functions as a protective radiation shield for the
planet.[6] Near the Earth's surface, however, it is a pollutant formed as a by-product of automobile
exhaust.[19] The metastable molecule tetraoxygen (O4) was discovered in 2001,[20][21] and was assumed to
exist in one of the six phases of solid oxygen. It was proven in 2006 that this phase, created by
pressurizing O2 to 20 GPa, is in fact a rhombohedral O8 cluster.[22] This cluster has the potential to be a
much more powerful oxidizer than either O2 or O3 and may therefore be used in rocket fuel.[20][21] A
metallic phase was discovered in 1990 when solid oxygen was subjected to pressures above 96 GPa,[23]
and it was shown in 1998 that at very low temperatures, this phase becomes superconducting.[24]

Allotropes of oxygen


There are several known allotropes of oxygen:

 atomic oxygen (O1) – an unstable free radical


 dioxygen, O2 – colorless
 ozone, O3 – blue
 tetraoxygen, O4 – metastable
 solid oxygen – exists in six variously colored phases, of which one is O8 and another metallic

Contents


 1 Dioxygen
 2 Singlet oxygen
 3 Ozone
 4 Tetraoxygen
 5 Phases of solid oxygen
 6 References
 7 Further reading

[edit] Dioxygen

Main article: Oxygen

The common allotrope of elemental oxygen on Earth, O2, is known as dioxygen. Elemental oxygen is
most commonly encountered in this form, as about 21% (by volume) of Earth's atmosphere. O2 has a
bond length of 121 pm and a bond energy of 498 kJ/mol.[1]

Oxygen itself is a colourless gas with a boiling point of −183 °C. It can be condensed out of air by cooling
with liquid nitrogen, which has a boiling point of −196 °C. Liquid oxygen is pale blue in colour, and is
quite markedly paramagnetic: liquid oxygen contained in a flask suspended by a string is attracted to a
magnet.

[edit] Singlet oxygen

Main article: Singlet oxygen

Singlet oxygen is the common name used for the two metastable states of molecular oxygen (O2) with
higher energy than the ground-state triplet oxygen. Because of the differences in their electron shells,
singlet oxygen has different chemical properties from triplet oxygen, including undergoing Diels-Alder
reactions and absorbing and emitting light at different wavelengths. It can be generated in a
photosensitized process by energy transfer from dye molecules such as rose bengal, methylene blue or
porphyrins, or by chemical processes such as the spontaneous decomposition of hydrogen trioxide in
water or the reaction of hydrogen peroxide with hypochlorite.

[edit] Ozone

Main article: Ozone

Triatomic oxygen (ozone, O3) is a very reactive allotrope of oxygen that is destructive to materials like
rubber and fabrics and is also damaging to lung tissue.[2] Traces of it can be detected as a sharp, chlorine-
like smell coming from electric motors, laser printers, and photocopiers. It was named "ozone" by
Christian Friedrich Schönbein, in 1840, from the Greek word ὠζώ (ozo) for smell.[3]

Ozone is thermodynamically unstable toward the more common dioxygen form, and is formed by
reaction of O2 with atomic oxygen produced by splitting of O2 by UV radiation in the upper atmosphere.[3]
Ozone absorbs strongly in the ultraviolet and functions as a shield for the biosphere against the mutagenic
and other damaging effects of solar UV radiation (see ozone layer).[3] Ozone is formed near the Earth's
surface by the photochemical disintegration of nitrogen dioxide from the exhaust of automobiles.[4]
Ground-level ozone is an air pollutant that is especially harmful for senior citizens, children, and people
with heart and lung conditions such as emphysema, bronchitis, and asthma.[5] The immune system
produces ozone as an antimicrobial (see below).[6] Liquid and solid O3 have a deeper-blue color than
ordinary oxygen and they are unstable and explosive.[3][7]

Ozone is a pale blue gas condensable to a dark blue liquid. It is formed whenever air is subjected to an
electrical discharge, and has the characteristic pungent odour of new-mown hay or, for those living in
urban environments, of subways: the so-called 'electrical odour'.

[edit] Tetraoxygen

Main article: Tetraoxygen

Tetraoxygen had been suspected to exist since the early 1900s, when it was known as oxozone, and was
identified in 2001 by a team led by F. Cacace at the University of Rome. The molecule O4 was thought to
be in one of the phases of solid oxygen later identified as O8. Cacace's team think that O4 probably
consists of two dumbbell-like O2 molecules loosely held together by induced dipole dispersion forces.

[edit] Phases of solid oxygen

Main article: Solid oxygen

There are six known distinct phases of solid oxygen. One of them is a dark-red O8 cluster. When oxygen is
subjected to a pressure of 96 GPa, it becomes metallic, in a manner similar to hydrogen,[8] and becomes
more similar to the heavier chalcogens, such as tellurium and polonium, both of which show significant
metallic character. At very low temperatures, this phase also becomes superconducting.

Physical properties

See also: Liquid oxygen and solid oxygen

Oxygen is more soluble in water than nitrogen is; water contains approximately 1 molecule of O2 for
every 2 molecules of N2, compared to an atmospheric ratio of approximately 1:4. The solubility of oxygen
in water is temperature-dependent, and about twice as much (14.6 mg·L−1) dissolves at 0 °C as at 20 °C
(7.6 mg·L−1).[25][26] At 25 °C and 1 standard atmosphere (101.3 kPa) of air, freshwater contains about
6.04 milliliters (mL) of oxygen per liter, whereas seawater contains about 4.95 mL per liter.[27] At 5 °C
the solubility increases to 9.0 mL (50% more than at 25 °C) per liter for water and 7.2 mL (45% more)
per liter for sea water.
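As a rough illustration of this temperature dependence, the short Python sketch below linearly
interpolates between the two cited freshwater values (14.6 mg/L at 0 °C and 7.6 mg/L at 20 °C). The
linear model and the function name are illustrative assumptions only; real solubility follows a nonlinear
Henry's-law curve.

def dissolved_o2_mg_per_l(temp_c):
    # Linear interpolation between the two cited freshwater data points:
    # 14.6 mg/L at 0 °C and 7.6 mg/L at 20 °C (illustrative only).
    t0, c0 = 0.0, 14.6
    t1, c1 = 20.0, 7.6
    return c0 + (c1 - c0) * (temp_c - t0) / (t1 - t0)

for t in (0, 5, 10, 15, 20):
    print(t, "°C:", round(dissolved_o2_mg_per_l(t), 1), "mg/L")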

Oxygen condenses at 90.20 K (−182.95 °C, −297.31 °F), and freezes at 54.36 K (−218.79 °C,
−361.82 °F).[28] Both liquid and solid O2 are clear substances with a light sky-blue color caused by
absorption in the red (in contrast with the blue color of the sky, which is due to Rayleigh scattering of
blue light). High-purity liquid O2 is usually obtained by the fractional distillation of liquefied air;[29]
Liquid oxygen may also be produced by condensation out of air, using liquid nitrogen as a coolant. It is a
highly reactive substance and must be segregated from combustible materials.[30]

Isotopes and stellar origin

Main article: Isotopes of oxygen

18O in the He-shell.

Naturally occurring oxygen is composed of three stable isotopes, 16O, 17O, and 18O, with 16O being the
most abundant (99.762% natural abundance).[31]

Most 16O is synthesized at the end of the helium fusion process in massive stars but some is made in the
neon burning process.[32] 17O is primarily made by the burning of hydrogen into helium during the CNO
cycle, making it a common isotope in the hydrogen-burning zones of stars.[32] Most 18O is produced when
14N (made abundant from CNO burning) captures a 4He nucleus, making 18O common in the helium-rich
zones of evolved, massive stars.[32]

Fourteen radioisotopes have been characterized, the most stable being 15O with a half-life of
122.24 seconds (s) and 14O with a half-life of 70.606 s.[31] All of the remaining radioactive isotopes have
half-lives that are less than 27 s and the majority of these have half-lives that are less than
83 milliseconds.[31] The most common decay mode of the isotopes lighter than 16O is β+ decay[33][34][35] to
yield nitrogen, and the most common mode for the isotopes heavier than 18O is beta decay to yield
fluorine.[31]

Occurrence

See also: Silicate minerals and Category:Oxide minerals

Oxygen is the most abundant chemical element, by mass, in our biosphere, air, sea and land. Oxygen is
the third most abundant chemical element in the universe, after hydrogen and helium. [1] About 0.9% of
the Sun's mass is oxygen.[4] Oxygen constitutes 49.2% of the Earth's crust by mass[2] and is the major
component of the world's oceans (88.8% by mass).[4] Oxygen gas is the second most common component
of the Earth's atmosphere, taking up 21.0% of its volume and 23.1% of its mass (some 10¹⁵
tonnes).[4][36][37] Earth is unusual among the planets of the Solar System in having such a high
concentration of oxygen gas in its atmosphere: Mars (with 0.1% O2 by volume) and Venus have far lower
concentrations. However, the O2 surrounding these other planets is produced solely by ultraviolet
radiation impacting oxygen-containing molecules such as carbon dioxide.

Cold water holds more dissolved O2.

The unusually high concentration of oxygen gas on Earth is the result of the oxygen cycle. This
biogeochemical cycle describes the movement of oxygen within and between its three main reservoirs on
Earth: the atmosphere, the biosphere, and the lithosphere. The main driving factor of the oxygen cycle is
photosynthesis, which is responsible for modern Earth's atmosphere. Photosynthesis releases oxygen into
the atmosphere, while respiration and decay remove it from the atmosphere. In the present equilibrium,
production and consumption occur at the same rate of roughly 1/2000th of the entire atmospheric oxygen
per year.

Free oxygen also occurs in solution in the world's water bodies. The increased solubility of O2 at lower
temperatures (see Physical properties) has important implications for ocean life, as polar oceans support a
much higher density of life due to their higher oxygen content.[38] Polluted water may have reduced
amounts of O2 in it, depleted by decaying algae and other biomaterials (see eutrophication). Scientists
assess this aspect of water quality by measuring the water's biochemical oxygen demand, or the amount of
O2 needed to restore it to a normal concentration.[39]

Biological role

Main article: Dioxygen in biological reactions

Photosynthesis and respiration

Photosynthesis splits water to liberate O2 and fixes CO2 into sugar.

In nature, free oxygen is produced by the light-driven splitting of water during oxygenic photosynthesis.
Green algae and cyanobacteria in marine environments provide about 70% of the free oxygen produced
on Earth; the rest is produced by terrestrial plants.[40]

A simplified overall formula for photosynthesis is:[41]

6 CO2 + 6 H2O + photons → C6H12O6 + 6 O2 (or simply carbon dioxide + water + sunlight →
glucose + dioxygen)

Photolytic oxygen evolution occurs in the thylakoid membranes of photosynthetic organisms and requires
the energy of four photons.[42] Many steps are involved, but the result is the formation of a proton gradient
across the thylakoid membrane, which is used to synthesize ATP via photophosphorylation.[43] The O2
remaining after oxidation of the water molecule is released into the atmosphere.[44]

Molecular dioxygen, O2, is essential for cellular respiration in all aerobic organisms. Oxygen is used in
mitochondria to help generate adenosine triphosphate (ATP) during oxidative phosphorylation. The
reaction for aerobic respiration is essentially the reverse of photosynthesis and is simplified as:

C6H12O6 + 6 O2 → 6 CO2 + 6 H2O + 2880 kJ·mol−1
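As a worked check of this energy figure: dividing the 2880 kJ released per mole of glucose by glucose's
molar mass of about 180.16 g/mol gives roughly 16 kJ per gram, the familiar ~4 kcal/g value for
carbohydrates. A one-line Python sketch of the same arithmetic (the variable names are ours):

energy_per_mol_kj = 2880.0      # kJ per mole of glucose, from the equation above
molar_mass_glucose = 180.16     # g/mol for C6H12O6
print(round(energy_per_mol_kj / molar_mass_glucose, 1), "kJ per gram of glucose")  # ~16.0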

In vertebrates, O2 is diffused through membranes in the lungs and into red blood cells. Hemoglobin binds
O2, changing its color from bluish red to bright red.[19][45] Other animals use hemocyanin (molluscs and
some arthropods) or hemerythrin (some marine invertebrates, such as priapulids and brachiopods).[36] A
liter of blood can dissolve 200 cm3 of O2.[36]

Reactive oxygen species, such as the superoxide ion (O2−) and hydrogen peroxide (H2O2), are dangerous
by-products of oxygen use in organisms.[36] Parts of the immune system of higher organisms, however,
create peroxide, superoxide, and singlet oxygen to destroy
invading microbes. Reactive oxygen species also play an important role in the hypersensitive response of
plants against pathogen attack.[43]

An adult human at rest inhales 1.8 to 2.4 grams of oxygen per minute.[46] This amounts to more than 6
billion tonnes of oxygen inhaled by humanity per year.[47]
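These two figures are mutually consistent, as the following order-of-magnitude Python check shows; the
world population of roughly 7 billion used here is an assumption for illustration.

minutes_per_year = 60 * 24 * 365
for grams_per_min in (1.8, 2.4):
    tonnes_per_person = grams_per_min * minutes_per_year / 1e6   # grams -> tonnes
    total_billion_tonnes = tonnes_per_person * 7e9 / 1e9         # assumed 7e9 people
    print(grams_per_min, "g/min ->", round(total_billion_tonnes, 1), "billion tonnes/yr")
# -> roughly 6.6 to 8.8 billion tonnes per year, consistent with "more than 6 billion"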

Build-up in the atmosphere

Main articles: Geological history of oxygen and Biological role of oxygen

O2 build-up in Earth's atmosphere: 1) no O2 produced; 2) O2 produced, but absorbed in oceans and seabed
rock; 3) O2 starts to outgas from the oceans, but is absorbed by land surfaces and formation of the ozone
layer; 4–5) O2 sinks filled and the gas accumulates

Free oxygen gas was almost nonexistent in Earth's atmosphere before photosynthetic archaea and bacteria
evolved. Free oxygen first appeared in significant quantities during the Paleoproterozoic eon (between 2.5
and 1.6 billion years ago). At first, the oxygen combined with dissolved iron in the oceans to form banded
iron formations. Free oxygen started to outgas from the oceans 2.7 billion years ago, reaching 10% of its
present level around 1.7 billion years ago.[48]

The presence of large amounts of dissolved and free oxygen in the oceans and atmosphere may have
driven most of the anaerobic organisms then living to extinction during the Great Oxygenation
Event (oxygen catastrophe) about 2.4 billion years ago. However, cellular respiration using O2 enables
aerobic organisms to produce much more ATP than anaerobic organisms, helping the former to dominate
Earth's biosphere.[49] Photosynthesis and cellular respiration of O2 allowed for the evolution of eukaryotic
cells and ultimately complex multicellular organisms such as plants and animals.

Since the beginning of the Cambrian period 540 million years ago, O2 levels have fluctuated between
15% and 30% by volume.[50] Towards the end of the Carboniferous period (about 300 million years ago)
atmospheric O2 levels reached a maximum of 35% by volume,[50] which may have contributed to the large
size of insects and amphibians at this time.[51] Human activities, including the burning of 7 billion tonnes
of fossil fuels each year, have had very little effect on the amount of free oxygen in the atmosphere.[11] At
the current rate of photosynthesis it would take about 2,000 years to regenerate the entire O2 in the present
atmosphere.[52]

History

Early experiments

Philo's experiment inspired later investigators.

One of the first known experiments on the relationship between combustion and air was conducted by the
2nd century BCE Greek writer on mechanics, Philo of Byzantium. In his work Pneumatica, Philo
observed that inverting a vessel over a burning candle and surrounding the vessel's neck with water
resulted in some water rising into the neck.[53] Philo incorrectly surmised that parts of the air in the vessel
were converted into the classical element fire and thus were able to escape through pores in the glass.
Many centuries later Leonardo da Vinci built on Philo's work by observing that a portion of air is
consumed during combustion and respiration.[54]

In the late 17th century, Robert Boyle proved that air is necessary for combustion. English chemist John
Mayow refined this work by showing that fire requires only a part of air that he called spiritus nitroaereus
or just nitroaereus.[55] In one experiment he found that placing either a mouse or a lit candle in a closed
container over water caused the water to rise and replace one-fourteenth of the air's volume before
extinguishing the subjects.[56] From this he surmised that nitroaereus is consumed in both respiration and
combustion.

Mayow observed that antimony increased in weight when heated, and inferred that the nitroaereus must
have combined with it.[55] He also thought that the lungs separate nitroaereus from air and pass it into the
blood and that animal heat and muscle movement result from the reaction of nitroaereus with certain
substances in the body.[55] Accounts of these and other experiments and ideas were published in 1668 in
his work Tractatus duo in the tract "De respiratione".[56]

Phlogiston theory

Main article: Phlogiston theory

Stahl helped develop and popularize the phlogiston theory.

Robert Hooke, Ole Borch, Mikhail Lomonosov, and Pierre Bayen all produced oxygen in experiments in
the 17th and the 18th century but none of them recognized it as an element.[25] This may have been in part
due to the prevalence of the philosophy of combustion and corrosion called the phlogiston theory, which
was then the favored explanation of those processes.

Established in 1667 by the German alchemist J. J. Becher, and modified by the chemist Georg Ernst Stahl
by 1731,[57] phlogiston theory stated that all combustible materials were made of two parts. One part,
called phlogiston, was given off when the substance containing it was burned, while the dephlogisticated
part was thought to be its true form, or calx.[54]

Highly combustible materials that leave little residue, such as wood or coal, were thought to be made
mostly of phlogiston; whereas non-combustible substances that corrode, such as iron, contained very
little. Air did not play a role in phlogiston theory, nor were any initial quantitative experiments conducted
to test the idea; instead, it was based on observations of what happens when something burns: most
common objects appear to become lighter and seem to lose something in the process.[54] The fact that a
substance like wood actually gains overall weight in burning was hidden by the buoyancy of the gaseous
combustion products. Indeed, one of the first clues that the phlogiston theory was incorrect was that
metals, too, gain weight in rusting (when they were supposedly losing phlogiston).

Carl Wilhelm Scheele beat Priestley to the discovery but published afterwards.

Discovery

Oxygen was first discovered by Swedish pharmacist Carl Wilhelm Scheele. He had produced oxygen gas
by heating mercuric oxide and various nitrates by about 1772.[4][54] Scheele called the gas "fire air"
because it was the only known supporter of combustion, and wrote an account of this discovery in a
manuscript he titled Treatise on Air and Fire, which he sent to his publisher in 1775. However, that
document was not published until 1777.[58]

Joseph Priestley is usually given priority in the discovery.

In the meantime, on August 1, 1774, an experiment conducted by the British clergyman Joseph Priestley
focused sunlight on mercuric oxide (HgO) inside a glass tube, which liberated a gas he named
"dephlogisticated air".[4] He noted that candles burned brighter in the gas and that a mouse was more
active and lived longer while breathing it. After breathing the gas himself, he wrote: "The feeling of it to
my lungs was not sensibly different from that of common air, but I fancied that my breast felt peculiarly
light and easy for some time afterwards."[25] Priestley published his findings in 1775 in a paper titled "An
Account of Further Discoveries in Air" which was included in the second volume of his book titled
Experiments and Observations on Different Kinds of Air.[54][59] Because he published his findings first,
Priestley is usually given priority in the discovery.

The noted French chemist Antoine Laurent Lavoisier later claimed to have discovered the new substance
independently. However, Priestley visited Lavoisier in October 1774 and told him about his experiment
and how he liberated the new gas. Scheele also posted a letter to Lavoisier on September 30, 1774 that
described his own discovery of the previously unknown substance, but Lavoisier never acknowledged
receiving it (a copy of the letter was found in Scheele's belongings after his death).[58]

Lavoisier's contribution

What Lavoisier did indisputably do (although this was disputed at the time) was to conduct the first
adequate quantitative experiments on oxidation and give the first correct explanation of how combustion
works.[4] He used these and similar experiments, all started in 1774, to discredit the phlogiston theory and
to prove that the substance discovered by Priestley and Scheele was a chemical element.

Antoine Lavoisier discredited the phlogiston theory.

In one experiment, Lavoisier observed that there was no overall increase in weight when tin and air were
heated in a closed container.[4] He noted that air rushed in when he opened the container, which indicated
that part of the trapped air had been consumed. He also noted that the tin had increased in weight and that
increase was the same as the weight of the air that rushed back in. This and other experiments on
combustion were documented in his book Sur la combustion en général, which was published in 1777.[4]
In that work, he proved that air is a mixture of two gases: 'vital air', which is essential to combustion and
respiration, and azote (Gk. ἄζωτον "lifeless"), which did not support either. Azote later became nitrogen
in English, although it has kept the name in French and several other European languages.[4]

Lavoisier renamed 'vital air' to oxygène in 1777 from the Greek roots ὀξύς (oxys) (acid, literally "sharp,"
from the taste of acids) and -γενής (-genēs) (producer, literally begetter), because he mistakenly believed
that oxygen was a constituent of all acids.[6] Chemists (notably Sir Humphry Davy) eventually
determined that Lavoisier was wrong in this regard (it is in fact hydrogen that forms the basis for acid
chemistry), but by that time it was too late; the name had taken hold.

Oxygen entered the English language despite opposition by English scientists and the fact that the
Englishman Priestley had first isolated the gas and written about it. This is partly due to a poem praising
the gas titled "Oxygen" in the popular book The Botanic Garden (1791) by Erasmus Darwin, grandfather
of Charles Darwin.[58]

Later history

Robert H. Goddard and a liquid oxygen-gasoline rocket

John Dalton's original atomic hypothesis assumed that all elements were monoatomic and that the atoms
in compounds would normally have the simplest atomic ratios with respect to one another. For example,
Dalton assumed that water's formula was HO, giving the atomic mass of oxygen as 8 times that of
hydrogen, instead of the modern value of about 16.[60] In 1805, Joseph Louis Gay-Lussac and Alexander
von Humboldt showed that water is formed of two volumes of hydrogen and one volume of oxygen; and
by 1811 Amedeo Avogadro had arrived at the correct interpretation of water's composition, based on
what is now called Avogadro's law and the assumption of diatomic elemental molecules.[61][62]

By the late 19th century scientists realized that air could be liquefied, and its components isolated, by
compressing and cooling it. Using a cascade method, Swiss chemist and physicist Raoul Pierre Pictet
evaporated liquid sulfur dioxide in order to liquefy carbon dioxide, which in turn was evaporated to cool
oxygen gas enough to liquefy it. He sent a telegram on December 22, 1877 to the French Academy of
Sciences in Paris announcing his discovery of liquid oxygen.[63] Just two days later, French physicist
Louis Paul Cailletet announced his own method of liquefying molecular oxygen.[63] Only a few drops of
the liquid were produced in either case, so no meaningful analysis could be conducted. Oxygen was
liquefied in a stable state for the first time on March 29, 1883 by Polish scientists from Jagiellonian
University, Zygmunt Wróblewski and Karol Olszewski.[64]

In 1891 Scottish chemist James Dewar was able to produce enough liquid oxygen to study.[11] The first
commercially viable process for producing liquid oxygen was independently developed in 1895 by
German engineer Carl von Linde and British engineer William Hampson. Both men lowered the
temperature of air until it liquefied and then distilled the component gases by boiling them off one at a
time and capturing them.[65] Later, in 1901, oxyacetylene welding was demonstrated for the first time by
burning a mixture of acetylene and compressed O2. This method of welding and cutting metal later
became common.[65]

In 1923, the American scientist Robert H. Goddard became the first person to develop a liquid-fueled rocket engine; the
engine used gasoline for fuel and liquid oxygen as the oxidizer. Goddard successfully flew a small liquid-
fueled rocket 56 m at 97 km/h on March 16, 1926 in Auburn, Massachusetts, USA.[65][66]

Industrial production

See also: Air separation, Oxygen evolution, and fractional distillation

Two major methods are employed to produce the 100 million tonnes of O2 extracted from air for
industrial uses annually.[58] The most common method is to fractionally distill liquefied air into its various
components, with nitrogen (N2) distilling as a vapor while oxygen (O2) is left as a liquid.[58]

Hofmann electrolysis apparatus used in the electrolysis of water.

The other major method of producing O2 gas involves passing a stream of clean, dry air through one bed
of a pair of identical zeolite molecular sieves, which adsorbs the nitrogen and delivers a gas stream that is
90% to 93% O2.[58] Simultaneously, nitrogen gas is released from the other nitrogen-saturated zeolite bed,
by reducing the chamber operating pressure and diverting part of the oxygen gas from the producer bed
through it, in the reverse direction of flow. After a set cycle time the operation of the two beds is
interchanged, thereby allowing for a continuous supply of gaseous oxygen to be pumped through a
pipeline. This is known as pressure swing adsorption. Oxygen gas is increasingly obtained by these non-
cryogenic technologies (see also the related vacuum swing adsorption).[67]

Oxygen gas can also be produced through electrolysis of water into molecular oxygen and hydrogen. DC
electricity must be used: if AC is used, the gases in each limb consist of hydrogen and oxygen in the
explosive ratio 2:1. Contrary to popular belief, the 2:1 ratio observed in the DC electrolysis of acidified
water does not prove that the empirical formula of water is H2O unless certain assumptions are made
about the molecular formulae of hydrogen and oxygen themselves.
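The 2:1 volume ratio follows from the electron bookkeeping of electrolysis: each O2 molecule requires
four electrons, while each H2 molecule requires two. A minimal Python sketch using Faraday's law,
assuming ideal-gas behavior at 20 °C and 1 atm (the function and parameter names are ours):

F = 96485.0   # Faraday constant, C per mole of electrons
VM = 24.055   # molar volume of an ideal gas at 20 °C and 1 atm, L/mol

def electrolysis_gas_volumes_l(current_a, hours):
    charge = current_a * hours * 3600     # total charge in coulombs
    h2 = charge / (2 * F) * VM            # 2 electrons per H2 molecule
    o2 = charge / (4 * F) * VM            # 4 electrons per O2 molecule
    return h2, o2

h2, o2 = electrolysis_gas_volumes_l(2.0, 1.0)
print(round(h2, 2), "L H2,", round(o2, 2), "L O2")   # the ratio is exactly 2:1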

A similar method is the electrocatalytic O2 evolution from oxides and oxoacids. Chemical catalysts can be
used as well, such as in chemical oxygen generators or oxygen candles that are used as part of the life-
support equipment on submarines, and are still part of standard equipment on commercial airliners in case
of depressurization emergencies. Another air separation technology involves forcing air to dissolve
through ceramic membranes based on zirconium dioxide by either high pressure or an electric current, to
produce nearly pure O2 gas.[39]

In large quantities, the price of liquid oxygen in 2001 was approximately $0.21/kg.[68] Since the primary
cost of production is the energy cost of liquefying the air, the production cost will change as energy cost
varies.

For reasons of economy, oxygen is often transported in bulk as a liquid in specially insulated tankers,
since one litre of liquefied oxygen is equivalent to 840 liters of gaseous oxygen at atmospheric pressure
and 20 °C.[58] Such tankers are used to refill bulk liquid oxygen storage containers, which stand outside
hospitals and other institutions with a need for large volumes of pure oxygen gas. Liquid oxygen is passed
through heat exchangers, which convert the cryogenic liquid into gas before it enters the building.
Oxygen is also stored and shipped in smaller cylinders containing the compressed gas; a form that is
useful in certain portable medical applications and oxy-fuel welding and cutting.[58]
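The 840:1 figure can be checked from first principles with the ideal gas law and the liquid density listed in
the properties table earlier (1.141 g·cm−3). The back-of-the-envelope Python sketch below (our own
calculation, not a cited one) gives a ratio in the same range; the difference is attributable to real-gas
behavior and rounding.

R, T, P = 8.314, 293.15, 101325.0   # gas constant, 20 °C in kelvin, 1 atm in Pa
M = 31.998e-3                       # molar mass of O2 in kg/mol
gas_density = P * M / (R * T)       # ideal-gas density in kg/m3, ~1.33
liquid_density = 1141.0             # kg/m3, i.e. 1.141 g/cm3 at the boiling point
print(round(liquid_density / gas_density), ": 1 expansion ratio")  # ~860:1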

Applications

See also: Breathing gas, Redox, and Combustion

Medical

An oxygen concentrator in an emphysema patient's house

Main article: Oxygen therapy

Uptake of O2 from the air is the essential purpose of respiration, so oxygen supplementation is used in
medicine. Treatment not only increases oxygen levels in the patient's blood, but has the secondary effect
of decreasing resistance to blood flow in many types of diseased lungs, easing work load on the heart.
Oxygen therapy is used to treat emphysema, pneumonia, some heart disorders (congestive heart failure),
some disorders that cause increased pulmonary artery pressure, and any disease that impairs the body's
ability to take up and use gaseous oxygen.[69]

Treatments are flexible enough to be used in hospitals, the patient's home, or increasingly by portable
devices. Oxygen tents were once commonly used in oxygen supplementation, but have since been
replaced mostly by the use of oxygen masks or nasal cannulas.[70]

Hyperbaric (high-pressure) medicine uses special oxygen chambers to increase the partial pressure of O2
around the patient and, when needed, the medical staff.[71] Carbon monoxide poisoning, gas gangrene, and
decompression sickness (the 'bends') are sometimes treated using these devices.[72] Increased O2
concentration in the lungs helps to displace carbon monoxide from the heme group of hemoglobin.[73][74]
Oxygen gas is poisonous to the anaerobic bacteria that cause gas gangrene, so increasing its partial
pressure helps kill them.[75][76] Decompression sickness occurs in divers who decompress too quickly after
a dive, resulting in bubbles of inert gas, mostly nitrogen and helium, forming in their blood. Increasing
the pressure of O2 as soon as possible is part of the treatment.[69][77][78]

Oxygen is also used medically for patients who require mechanical ventilation, often at concentrations
above the 21% found in ambient air.

Life support and recreational use


Low pressure pure O2 is used in space suits.

A notable application of O2 as a low-pressure breathing gas is in modern space suits, which surround
their occupant's body with the pressurized breathing gas. These devices use nearly pure oxygen at about one third normal
pressure, resulting in a normal blood partial pressure of O2.[79][80] This trade-off of higher oxygen
concentration for lower pressure is needed to maintain flexible spacesuits.

Scuba divers and submariners also rely on artificially delivered O2, but most often use normal pressure,
and/or mixtures of oxygen and air. Pure or nearly pure O2 use in diving at higher-than-sea-level pressures
is usually limited to rebreather, decompression, or emergency treatment use at relatively shallow depths
(~6 meters depth, or less).[81][82] Deeper diving requires significant dilution of O2 with other gases, such as
nitrogen or helium, to help prevent oxygen toxicity.[81]

People who climb mountains or fly in non-pressurized fixed-wing aircraft sometimes have supplemental
O2 supplies.[83] Passengers traveling in (pressurized) commercial airplanes have an emergency supply of
O2 automatically supplied to them in case of cabin depressurization. Sudden cabin pressure loss activates
chemical oxygen generators above each seat, causing oxygen masks to drop. Pulling on the masks "to
start the flow of oxygen", as cabin safety instructions dictate, forces iron filings into the sodium chlorate
inside the canister.[39] A steady stream of oxygen gas is then produced by the exothermic reaction.

Oxygen, as a supposed mild euphoric, has a history of recreational use in oxygen bars and in sports.
Oxygen bars are establishments found in Japan, California, and Las Vegas, Nevada since the late 1990s
that offer higher than normal O2 exposure for a fee.[84] Professional athletes, especially in American
football, also sometimes go off field between plays to wear oxygen masks in order to get a "boost" in
performance. The pharmacological effect is doubtful; a placebo effect is a more likely explanation.[84]
Available studies support a performance boost from enriched O2 mixtures only if they are breathed during
actual aerobic exercise.[85]

Other recreational uses that do not involve breathing the gas include pyrotechnic applications, such as
George Goble's five-second ignition of barbecue grills.[86]

Industrial

Most commercially produced O2 is used to smelt iron into steel.

Smelting of iron ore into steel consumes 55% of commercially produced oxygen.[39] In this process, O2 is
injected through a high-pressure lance into molten iron, which removes sulfur impurities and excess
carbon as the respective oxides, SO2 and CO2. The reactions are exothermic, so the temperature increases
to 1,700 °C.[39]

Another 25% of commercially produced oxygen is used by the chemical industry.[39] Ethylene is reacted
with O2 to create ethylene oxide, which, in turn, is converted into ethylene glycol; the primary feeder
material used to manufacture a host of products, including antifreeze and polyester polymers (the
precursors of many plastics and fabrics).[39]

Most of the remaining 20% of commercially produced oxygen is used in medical applications, metal
cutting and welding, as an oxidizer in rocket fuel, and in water treatment.[39] Oxygen is used in
oxyacetylene welding, burning acetylene with O2 to produce a very hot flame. In this process, metal up to
60 cm thick is first heated with a small oxy-acetylene flame and then quickly cut by a large stream of
O2.[87] Larger rockets use liquid oxygen as their oxidizer, which is mixed and ignited with the fuel for
propulsion.

Scientific

500 million years of climate change vs 18O

Paleoclimatologists measure the ratio of oxygen-18 and oxygen-16 in the shells and skeletons of marine
organisms to determine what the climate was like millions of years ago (see oxygen isotope ratio cycle).
Seawater molecules that contain the lighter isotope, oxygen-16, evaporate at a slightly faster rate than
water molecules containing the 12% heavier oxygen-18; this disparity increases at lower temperatures.[88]
During periods of lower global temperatures, snow and rain from that evaporated water tends to be higher
in oxygen-16, and the seawater left behind tends to be higher in oxygen-18. Marine organisms then
incorporate more oxygen-18 into their skeletons and shells than they would in a warmer climate.[88]
Paleoclimatologists also directly measure this ratio in the water molecules of ice core samples that are up
to several hundreds of thousands of years old.
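Such measurements are conventionally reported in delta notation, which expresses a sample's 18O/16O
ratio relative to a reference standard, in parts per thousand. A minimal Python sketch follows; the sample
ratios below are made-up illustrative numbers, not measurements, and VSMOW is one commonly used
reference standard.

def delta_18o_permil(r_sample, r_standard):
    # delta-18O = (R_sample / R_standard - 1) * 1000, in per mil
    return (r_sample / r_standard - 1.0) * 1000.0

R_VSMOW = 2.0052e-3   # approximate 18O/16O ratio of the VSMOW standard
print(round(delta_18o_permil(2.010e-3, R_VSMOW), 1))   # positive: enriched in 18O
print(round(delta_18o_permil(1.995e-3, R_VSMOW), 1))   # negative: depleted in 18O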

Planetary geologists have measured different abundances of oxygen isotopes in samples from the Earth,
the Moon, Mars, and meteorites, but were long unable to obtain reference values for the isotope ratios in
the Sun, believed to be the same as those of the primordial solar nebula. However, analysis of a silicon
wafer exposed to the solar wind in space and returned by the crashed Genesis spacecraft has shown that
the Sun has a higher proportion of oxygen-16 than does the Earth. The measurement implies that an
unknown process depleted oxygen-16 from the Sun's disk of protoplanetary material prior to the
coalescence of dust grains that formed the Earth.[89]

Oxygen presents two spectrophotometric absorption bands peaking at the wavelengths 687 and 760 nm.
Some remote sensing scientists have proposed using the measurement of the radiance coming from
vegetation canopies in those bands to characterize plant health status from a satellite platform.[90] This
approach exploits the fact that in those bands it is possible to discriminate the vegetation's reflectance
from its fluorescence, which is much weaker. The measurement is technically difficult owing to the low
signal-to-noise ratio and the physical structure of vegetation; but it has been proposed as a possible
method of monitoring the carbon cycle from satellites on a global scale.

Compounds

Main article: Compounds of oxygen

Water (H2O) is the most familiar oxygen compound.

The oxidation state of oxygen is −2 in almost all known compounds of oxygen. The oxidation state −1 is
found in a few compounds such as peroxides.[91] Compounds containing oxygen in other oxidation states
are very uncommon: −1/2 (superoxides), −1/3 (ozonides), 0 (elemental, hypofluorous acid), +1/2
(dioxygenyl), +1 (dioxygen difluoride), and +2 (oxygen difluoride).

Oxides and other inorganic compounds

Water (H2O) is the oxide of hydrogen and the most familiar oxygen compound. Hydrogen atoms are
covalently bonded to oxygen in a water molecule but also have an additional attraction (about
23.3 kJ·mol−1 per hydrogen atom) to an adjacent oxygen atom in a separate molecule. [92] These hydrogen
bonds between water molecules hold them approximately 15% closer than would be expected in a
simple liquid with just van der Waals forces.[93][94]

Oxides, such as iron oxide or rust, form when oxygen combines with other elements.

Due to its electronegativity, oxygen forms chemical bonds with almost all other elements at elevated
temperatures to give corresponding oxides. However, some elements readily form oxides at standard
conditions for temperature and pressure; the rusting of iron is an example. The surfaces of metals like
aluminium and titanium are oxidized in the presence of air and become coated with a thin film of oxide
that passivates the metal and slows further corrosion. Some of the transition metal oxides are found in
nature as non-stoichiometric compounds, with slightly less metal than the chemical formula would
suggest. For example, the naturally occurring FeO (wüstite) is actually written as Fe1−xO, where x is
usually around 0.05.[95]

Oxygen as a compound is present in the atmosphere in trace quantities in the form of carbon dioxide
(CO2). The Earth's crustal rock is composed in large part of oxides of silicon (silica SiO2, found in granite
and sand), aluminium (aluminium oxide Al2O3, in bauxite and corundum), iron (iron(III) oxide Fe2O3, in
hematite and rust) and other metals.

The rest of the Earth's crust is also made of oxygen compounds, in particular calcium carbonate (in
limestone) and silicates (in feldspars). Water-soluble silicates in the form of Na4SiO4, Na2SiO3, and
Na2Si2O5 are used as detergents and adhesives.[96]

Oxygen also acts as a ligand for transition metals, forming metal–O2 bonds with the iridium atom in
Vaska's complex,[97] with the platinum in PtF6,[98] and with the iron center of the heme group of
hemoglobin.

Organic compounds and biomolecules

Acetone is an important feeder material in the chemical industry.

Oxygen represents more than 40% of the molecular mass of the ATP molecule.

Among the most important classes of organic compounds that contain oxygen are (where "R" is an
organic group): alcohols (R-OH); ethers (R-O-R); ketones (R-CO-R); aldehydes (R-CO-H); carboxylic
acids (R-COOH); esters (R-COO-R); acid anhydrides (R-CO-O-CO-R); and amides (R-C(O)-NR2). There
are many important organic solvents that contain oxygen, including: acetone, methanol, ethanol,
isopropanol, furan, THF, diethyl ether, dioxane, ethyl acetate, DMF, DMSO, acetic acid, and formic acid.
Acetone ((CH3)2CO) and phenol (C6H5OH) are used as feeder materials in the synthesis of many different
substances. Other important organic compounds that contain oxygen are: glycerol, formaldehyde,
glutaraldehyde, citric acid, acetic anhydride, and acetamide. Epoxides are ethers in which the oxygen
atom is part of a ring of three atoms.

Oxygen reacts spontaneously with many organic compounds at or below room temperature in a process
called autoxidation.[99] Most of the organic compounds that contain oxygen are not made by direct action
of O2. Organic compounds important in industry and commerce that are made by direct oxidation of a
precursor include ethylene oxide and peracetic acid.[96]

The element is found in almost all biomolecules that are important to (or generated by) life. Only a few
common complex biomolecules, such as squalene and the carotenes, contain no oxygen. Of the organic
compounds with biological relevance, carbohydrates contain the largest proportion by mass of oxygen.
All fats, fatty acids, amino acids, and proteins contain oxygen (due to the presence of carbonyl groups in
these acids and their ester residues). Oxygen also occurs in phosphate (PO43−) groups in the biologically
important energy-carrying molecules ATP and ADP, in the backbone and the
purines (except adenine) and pyrimidines of RNA and DNA, and in bones as calcium phosphate and
hydroxylapatite.

Safety and precautions

Toxicity

Main article: Oxygen toxicity

Main symptoms of oxygen toxicity[100]

Oxygen toxicity occurs when the lungs take in a higher than normal O2 partial pressure, which can occur
in deep scuba diving.

Oxygen gas (O2) can be toxic at elevated partial pressures, leading to convulsions and other health
problems.[81][101][102] Oxygen toxicity usually begins to occur at partial pressures of more than 50
kilopascals (kPa), or 2.5 times the normal sea-level O2 partial pressure of about 21 kPa (equal to about
50% oxygen composition at standard pressure). This is not a problem except for patients on mechanical
ventilators, since gas supplied through oxygen masks in medical applications is typically composed of
only 30%–50% O2 by volume (about 30 kPa at standard pressure), although this figure is subject to wide
variation depending on the type of mask.[25]

At one time, premature babies were placed in incubators containing O2-rich air, but this practice was
discontinued after some babies were blinded by it.[25][103]

Breathing pure O2 in space applications, such as in some modern space suits, or in early spacecraft such
as Apollo, causes no damage due to the low total pressures used.[79][104] In the case of spacesuits, the O2
partial pressure in the breathing gas is, in general, about 30 kPa (1.4 times normal), and the resulting O2
partial pressure in the astronaut's arterial blood is only marginally more than normal sea-level O2 partial
pressure (see arterial blood gas).

Oxygen toxicity to the lungs and central nervous system can also occur in deep scuba diving and surface
supplied diving.[25][81] Prolonged breathing of an air mixture with an O2 partial pressure more than 60 kPa
can eventually lead to permanent pulmonary fibrosis.[105] Exposure to O2 partial pressures greater than
160 kPa (about 1.6 atm) may lead to convulsions (normally fatal for divers). Acute oxygen toxicity
(causing seizures, its most feared effect for divers) can occur by breathing an air mixture with 21% O2 at
66 m or more of depth; the same thing can occur by breathing 100% O2 at only 6 m.[105][106][107][108]
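Both depth figures follow from the same arithmetic: ambient pressure grows by roughly one atmosphere
per 10 m of seawater, and the O2 partial pressure is that ambient pressure times the oxygen fraction of the
breathing gas. A short Python sketch of this rule of thumb (the function name is ours):

def o2_partial_pressure_atm(depth_m, o2_fraction):
    ambient_atm = 1.0 + depth_m / 10.0   # ~1 atm added per 10 m of seawater
    return ambient_atm * o2_fraction

print(round(o2_partial_pressure_atm(66, 0.21), 2))   # air at 66 m -> ~1.6 atm of O2
print(round(o2_partial_pressure_atm(6, 1.00), 2))    # pure O2 at 6 m -> ~1.6 atm of O2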

Combustion and other hazards

Pure O2 at higher than normal pressure and a spark led to a fire and the loss of the Apollo 1 crew.

Highly concentrated sources of oxygen promote rapid combustion. Fire and explosion hazards exist when
concentrated oxidants and fuels are brought into close proximity; however, an ignition event, such as heat
or a spark, is needed to trigger combustion.[109] Oxygen itself is not the fuel, but the oxidant. Combustion
hazards also apply to compounds of oxygen with a high oxidative potential, such as peroxides, chlorates,
nitrates, perchlorates, and dichromates because they can donate oxygen to a fire.

Concentrated O2 will allow combustion to proceed rapidly and energetically.[109] Steel pipes and storage
vessels used to store and transmit both gaseous and liquid oxygen will act as a fuel; and therefore the
design and manufacture of O2 systems requires special training to ensure that ignition sources are
minimized.[109] The fire that killed the Apollo 1 crew in a launch pad test spread so rapidly because the
capsule was pressurized with pure O2 but at slightly more than atmospheric pressure, instead of the 1⁄3
normal pressure that would be used in a mission.[110][111]

Liquid oxygen spills, if allowed to soak into organic matter, such as wood, petrochemicals, and asphalt
can cause these materials to detonate unpredictably on subsequent mechanical impact.[109] As with other
cryogenic liquids, on contact with the human body it can cause frostbite of the skin and the eyes.

Nitrogen


Periodic table position: carbon ← nitrogen → oxygen (above: –; below: P); atomic number 7 (7N)

Appearance: colorless gas, liquid or solid

Spectral lines of nitrogen

General properties

Name, symbol, number: nitrogen, N, 7
Pronunciation: /ˈnaɪtrɵdʒɪn/ NYE-tro-jin
Element category: nonmetal
Group, period, block: 15, 2, p
Standard atomic weight: 14.0067 g·mol−1
Electron configuration: 1s2 2s2 2p3
Electrons per shell: 2, 5

Physical properties

Phase: gas
Density (0 °C, 101.325 kPa): 1.251 g/L
Liquid density at b.p.: 0.808 g·cm−3
Melting point: 63.153 K (−210.00 °C, −346.00 °F)
Boiling point: 77.36 K (−195.79 °C, −320.33 °F)
Triple point: 63.1526 K (−210.00 °C), 12.53 kPa
Critical point: 126.19 K, 3.3978 MPa
Heat of fusion (N2): 0.72 kJ·mol−1
Heat of vaporization (N2): 5.56 kJ·mol−1
Specific heat capacity (25 °C, N2): 29.124 J·mol−1·K−1

Vapor pressure

P (Pa):    1    10   100   1 k   10 k   100 k
at T (K):  37   41   46    53    62     77

Atomic properties

Oxidation states: 5, 4, 3, 2, 1, −1, −2, −3 (strongly acidic oxide)
Electronegativity: 3.04 (Pauling scale)
Ionization energies: 1st: 1402.3 kJ·mol−1; 2nd: 2856 kJ·mol−1; 3rd: 4578.1 kJ·mol−1
Covalent radius: 71±1 pm
Van der Waals radius: 155 pm

Miscellanea

Crystal structure: hexagonal
Magnetic ordering: diamagnetic
Thermal conductivity (300 K): 25.83×10−3 W·m−1·K−1
Speed of sound (gas, 27 °C): 353 m/s
CAS registry number: 7727-37-9

Most stable isotopes

Main article: Isotopes of nitrogen

13N: synthetic; half-life 9.965 min; ε decay (2.220 MeV) to 13C
14N: 99.634% natural abundance; stable with 7 neutrons
15N: 0.366% natural abundance; stable with 8 neutrons

Nitrogen ( /ˈnaɪtrɵdʒɪn/ NYE-tro-jin) is a chemical element with the symbol N, atomic number 7 and
atomic mass 14.00674 u. Elemental nitrogen is a colorless, odorless, tasteless and mostly inert
diatomic gas at standard conditions, constituting 78.08% by volume of Earth's atmosphere. The element
nitrogen was discovered as a separable component of air, by Scottish physician Daniel Rutherford, in
1772.

Many industrially important compounds, such as ammonia, nitric acid, organic nitrates (propellants and
explosives), and cyanides, contain nitrogen. The extremely strong bond in elemental nitrogen dominates
nitrogen chemistry, causing difficulty for both organisms and industry in breaking the bond to convert the
N2 into useful compounds, but at the same time causing release of large amounts of often useful energy
when the compounds burn, explode, or decay back into nitrogen gas.

Nitrogen occurs in all living organisms, and the nitrogen cycle describes movement of the element from
air into the biosphere and organic compounds, then back into the atmosphere. Synthetically-produced
nitrates are key ingredients of industrial fertilizers, and also key pollutants in causing the eutrophication
of water systems. Nitrogen is a constituent element of amino acids and thus of proteins, and of nucleic
acids (DNA and RNA). It resides in the chemical structure of almost all neurotransmitters, and is a
defining component of alkaloids, biological molecules produced by many organisms.

Contents


 1 History
 2 Properties
o 2.1 Isotopes
o 2.2 Electromagnetic spectrum
o 2.3 Reactions
 3 Occurrence
 4 Compounds
 5 Production and applications
o 5.1 Nitrogenated beer
o 5.2 Liquid nitrogen
o 5.3 Applications of nitrogen compounds
 6 Biological role
 7 Safety
 8 See also
 9 References
 10 Further reading
 11 External links

History

Nitrogen (Latin nitrogenium, where nitrum (from Greek nitron νιτρον) means "saltpetre" (see nitre), and
genes γενης means "forming") is formally considered to have been discovered in 1772 by Daniel
Rutherford, who called it noxious air or fixed air.[1] The fact that there was a component of air that did not
support combustion was clear to Rutherford. Nitrogen was also studied at about the same time by Carl
Wilhelm Scheele, Henry Cavendish, and Joseph Priestley, who referred to it as burnt air or phlogisticated
air. Nitrogen gas was inert enough that Antoine Lavoisier referred to it as "mephitic air" or azote, from
the Greek word άζωτος (azotos) meaning "lifeless".[2] In it animals died and flames were extinguished.
Lavoisier's name for nitrogen is used in many languages (French, Polish, Russian, etc.) and still remains
in English in the common names of many compounds, such as hydrazine and compounds of the azide ion.

Nitrogen compounds were well known during the Middle Ages. Alchemists knew nitric acid as aqua
fortis (strong water). The mixture of nitric and hydrochloric acids was known as aqua regia (royal water),
celebrated for its ability to dissolve gold (the king of metals). The earliest military, industrial and
agricultural applications of nitrogen compounds involved uses of saltpeter (sodium nitrate or potassium
nitrate), notably in gunpowder, and later as fertilizer. In 1910, Lord Rayleigh discovered that an electrical
discharge in nitrogen gas produced "active nitrogen", an allotrope considered to be monatomic. The
"whirling cloud of brilliant yellow light" produced by his apparatus reacted with quicksilver to produce
explosive mercury nitride.[3]

Properties

Nitrogen is a nonmetal, with an electronegativity of 3.04. It has five electrons in its outer shell and is
therefore trivalent in most compounds. The triple bond in molecular nitrogen (N2) is one of the strongest
known chemical bonds. The resulting difficulty of converting N2 into other compounds, and the ease (and
associated high energy release) of converting nitrogen compounds into elemental N2, have dominated the
role of nitrogen in both nature and human economic activities.

At atmospheric pressure molecular nitrogen condenses (liquefies) at 77 K (−195.8 °C) and freezes at 63 K
(−210.0 °C) into the beta hexagonal close-packed crystal allotropic form. Below 35.4 K (−237.6 °C)
nitrogen assumes the cubic crystal allotropic form (called the alpha-phase). Liquid nitrogen, a fluid
resembling water in appearance, but with 80.8% of the density (the density of liquid nitrogen at its boiling
point is 0.808 g/mL), is a common cryogen.

Unstable allotropes of nitrogen consisting of more than two nitrogen atoms, such as N3 and N4, have been
produced in the laboratory.[4] Under extremely high pressures (1.1 million atm) and high temperatures
(2000 K), as produced using a diamond anvil cell, nitrogen polymerizes into the single-bonded cubic
gauche crystal structure. This structure is similar to that of diamond, and both have extremely strong
covalent bonds; the polymeric form is nicknamed "nitrogen diamond."[5]

Isotopes

See also: Isotopes of nitrogen

There are two stable isotopes of nitrogen: 14N and 15N. By far the most common is 14N (99.634%), which
is produced in the CNO cycle in stars. Of the ten isotopes produced synthetically, 13N has a half-life of ten
minutes and the remaining isotopes have half-lives on the order of seconds or less. Biologically mediated
reactions (e.g., assimilation, nitrification, and denitrification) strongly control nitrogen dynamics in the
soil. These reactions typically result in 15N enrichment of the substrate and depletion of the product.

A small part (0.73%) of the molecular nitrogen in Earth's atmosphere is the isotopologue 14N15N, and
almost all the rest is 14N2.

Radioisotope 16N is the dominant radionuclide in the coolant of pressurized water reactors or boiling
water reactors during normal operation. It is produced from 16O (in water) via (n,p) reaction. It has a short
half-life of about 7.1 s, but during its decay back to 16O produces high-energy gamma radiation (5 to 7
MeV).

Because of this, the access to the primary coolant piping in a pressurized water reactor must be restricted
during reactor power operation.[6] 16N is one of the main means used to immediately detect even small
leaks from the primary coolant to the secondary steam cycle.

Similarly, access to any of the steam cycle components in a boiling water reactor nuclear power plant
must be restricted during operation. Condensate from the condenser is typically retained for 10 minutes to
allow for decay of the 16N. This eliminates the need to shield and restrict access to any of the feed water
piping or pumps.
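
The 10-minute holdup is very conservative relative to the 7.1 s half-life; a minimal back-of-the-envelope
sketch in Python, using only the figures quoted above:

    # Fraction of 16N activity remaining after holding condensate for 10 minutes,
    # given the ~7.1 s half-life quoted above.
    half_life_s = 7.1
    holdup_s = 10 * 60

    fraction = 0.5 ** (holdup_s / half_life_s)
    print(f"{holdup_s / half_life_s:.0f} half-lives -> {fraction:.1e} remaining")
    # ~85 half-lives -> ~3e-26, i.e. the 16N activity is effectively zero.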

Electromagnetic spectrum

A 1×5 cm vial of glowing ultrapure nitrogen

Nitrogen discharge (spectrum) tube

Molecular nitrogen (14N2) is largely transparent to infrared and visible radiation because it is a
homonuclear molecule and thus has no dipole moment to couple to electromagnetic radiation at these
wavelengths. Significant absorption occurs at extreme ultraviolet wavelengths, beginning around 100
nanometers. This is associated with electronic transitions in the molecule to states in which charge is not
distributed evenly between nitrogen atoms. This absorption significantly attenuates ultraviolet radiation
in the Earth's upper atmosphere and the atmospheres of other planetary bodies. For
similar reasons, pure molecular nitrogen lasers typically emit light in the ultraviolet range.

Nitrogen also makes a contribution to visible air glow from the Earth's upper atmosphere, through
electron impact excitation followed by emission. This visible blue air glow (seen in the polar aurora and
in the re-entry glow of returning spacecraft) typically results not from molecular nitrogen, but rather from
free nitrogen atoms combining with oxygen to form nitric oxide (NO).

Reactions

Structure of dinitrogen, N2

Structure of [Ru(NH3)5(N2)]2+

Nitrogen is generally unreactive at standard temperature and pressure. N2 reacts spontaneously with few
reagents, being resilient to acids and bases as well as oxidants and most reductants. When nitrogen reacts
spontaneously with a reagent, the net transformation is often called nitrogen fixation.

Nitrogen reacts with elemental lithium.[7] Lithium burns in an atmosphere of N2 to give lithium nitride:

6 Li + N2 → 2 Li3N

Magnesium also burns in nitrogen, forming magnesium nitride.

3 Mg + N2 → Mg3N2

N2 forms a variety of adducts with transition metals. The first example of a dinitrogen complex to be
discovered was [Ru(NH3)5(N2)]2+ (see figure at right). Such compounds are now numerous; other examples
include IrCl(N2)(PPh3)2, W(N2)2(Ph2CH2CH2PPh2)2, and [(η5-C5Me4H)2Zr]2(μ2, η²,η²-N2). These complexes
illustrate how N2 might bind to the metal(s) in nitrogenase and the catalyst for the Haber process.[8] A
catalytic process to reduce N2 to ammonia with the use of a molybdenum complex in the presence of a
proton source was published in 2005.[7]

The starting point for industrial production of nitrogen compounds is the Haber process, in which
nitrogen is fixed by reacting N2 and H2 over an iron catalyst (derived from magnetite, Fe3O4) at about
500 °C and 200 atmospheres pressure. Biological nitrogen fixation in free-living cyanobacteria and in the
root nodules of plants also produces ammonia from molecular nitrogen. The reaction, which is the source
of the bulk of nitrogen in the biosphere, is catalyzed by the nitrogenase enzyme complex, which contains
Fe and Mo atoms, using energy derived from hydrolysis of adenosine triphosphate (ATP) into adenosine
diphosphate and inorganic phosphate (−20.5 kJ/mol).
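
The overall reaction of the Haber process, written in the same notation as the equations above, is:

N2 + 3 H2 → 2 NH3

The reaction is exothermic, releasing about 92 kJ per mole of N2 converted.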

Occurrence

Nitrogen is the largest single constituent of the Earth's atmosphere (78.082% by volume of dry air, 75.3%
by weight in dry air). It is created by fusion processes in stars, and is estimated to be the 7th most
abundant chemical element by mass in the universe.[9]
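
The volume and weight figures are related through the molar masses involved; a minimal sketch in
Python, assuming standard values of 28.013 g/mol for N2 and 28.96 g/mol for mean dry air (values not
given in the text):

    # Convert the N2 volume (mole) fraction of dry air to an approximate mass fraction.
    M_N2 = 28.013    # g/mol, molecular nitrogen (assumed standard value)
    M_air = 28.96    # g/mol, mean molar mass of dry air (assumed standard value)
    x_N2 = 0.78082   # volume fraction, from the text

    w_N2 = x_N2 * M_N2 / M_air
    print(f"{w_N2:.1%}")  # ~75.5%, close to the quoted weight figure

The small difference from the quoted 75.3% reflects rounding in the assumed composition, not an error in
the method.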

Molecular nitrogen and nitrogen compounds have been detected in interstellar space by astronomers using
the Far Ultraviolet Spectroscopic Explorer.[10] Molecular nitrogen is a major constituent of the Saturnian
moon Titan's thick atmosphere, and occurs in appreciable to trace amounts in other planetary
atmospheres.[11]

Nitrogen is present in all living organisms, in proteins, nucleic acids and other molecules. It typically
makes up around 4% of the dry weight of plant matter, and around 3% of the weight of the human body.
It is a large component of animal waste (for example, guano), usually in the form of urea, uric acid,
ammonium compounds and derivatives of these nitrogenous products, which are essential nutrients for all
plants that cannot fix atmospheric nitrogen.

Nitrogen occurs naturally in many minerals, such as saltpetre (potassium nitrate), Chile saltpetre (sodium
nitrate) and sal ammoniac (ammonium chloride). Most of these are uncommon, partly because of the
minerals' ready solubility in water. See also Nitrate minerals and Ammonium minerals.

Compounds

See also: Category:Nitrogen compounds

The main neutral hydride of nitrogen is ammonia (NH3), although hydrazine (N2H4) is also commonly
used. Ammonia is more basic than water by six orders of magnitude. In solution ammonia forms the
ammonium ion (NH4+). Liquid ammonia (boiling point 240 K) is amphiprotic (displaying either
Brønsted-Lowry acidic or basic character) and forms ammonium and the less common amide ions (NH2−);
both amide and nitride (N3−) salts are known, but decompose in water. Singly, doubly, triply and
quadruply substituted alkyl compounds of ammonia are called amines (four substitutions, to form
commercially and biologically important quaternary ammonium cations, result in a positively charged
nitrogen, and thus a water-soluble, or at least amphiphilic, compound). Larger chains, rings and structures
of nitrogen hydrides are also known, but are generally unstable.

Other classes of nitrogen anions (negatively charged ions) are the poisonous azides (N3−), which are
linear and isoelectronic to carbon dioxide, but which bind to important iron-containing enzymes in the
body in a manner more resembling cyanide. Another molecule of the same structure is the colorless and
relatively inert anesthetic gas nitrous oxide (dinitrogen monoxide, N2O), also known as laughing gas.
This is one of a variety of nitrogen oxides that form a family often abbreviated as NOx.
Nitric oxide (nitrogen monoxide, NO), is a natural free radical used in signal transduction in both plants
and animals, for example in vasodilation by causing the smooth muscle of blood vessels to relax. The
reddish and poisonous nitrogen dioxide NO2 contains an unpaired electron and is an important component
of smog. Nitrogen molecules containing unpaired electrons show an understandable tendency to dimerize
(thus pairing the electrons), and are generally highly reactive. The corresponding acids are nitrous HNO2
and nitric acid HNO3, with the corresponding salts called nitrites and nitrates.

The higher oxides dinitrogen trioxide N2O3, dinitrogen tetroxide N2O4 and dinitrogen pentoxide N2O5, are
unstable and explosive, a consequence of the chemical stability of N2. Nearly every hypergolic rocket
engine uses N2O4 as the oxidizer; their fuels, various forms of hydrazine, are also nitrogen compounds.
These engines are extensively used on spacecraft such as the space shuttle and those of the Apollo
Program because their propellants are liquids at room temperature and ignition occurs on contact without
an ignition system, allowing many precisely controlled burns. Some launch vehicles, such as the Titan II
and Ariane 1 through 4 also use hypergolic fuels, although the trend is away from such engines for cost
and safety reasons. N2O4 is an intermediate in the manufacture of nitric acid HNO3, one of the few acids
stronger than hydronium and a fairly strong oxidizing agent.

Nitrogen is notable for the range of explosively unstable compounds that it can produce. Nitrogen
triiodide NI3 is an extremely sensitive contact explosive. Nitrocellulose, produced by nitration of cellulose
with nitric acid, is also known as guncotton. Nitroglycerin, made by nitration of glycerin, is the
dangerously unstable explosive ingredient of dynamite. The comparatively stable, but more powerful
explosive trinitrotoluene (TNT) is the standard explosive against which the power of nuclear explosions
is measured.

Nitrogen can also be found in organic compounds. Common nitrogen functional groups include: amines,
amides, nitro groups, imines, and enamines. The amount of nitrogen in a chemical substance can be
determined by the Kjeldahl method.

Production and applications


A computer rendering of the nitrogen molecule, N2

Nitrogen gas is an industrial gas produced by the fractional distillation of liquid air, or by mechanical
means using gaseous air (i.e., pressurized reverse osmosis membranes or pressure swing adsorption).
Commercial nitrogen is often a byproduct of air-processing for industrial concentration of oxygen for
steelmaking and other purposes. When supplied compressed in cylinders it is often called OFN (oxygen-
free nitrogen).[12]

Nitrogen gas has a variety of applications, including serving as an inert replacement for air where
oxidation is undesirable:

 As a modified atmosphere, pure or mixed with carbon dioxide, to preserve the freshness of
packaged or bulk foods (by delaying rancidity and other forms of oxidative damage)
 In ordinary incandescent light bulbs as an inexpensive alternative to argon.[13]
 The production of electronic parts such as transistors, diodes, and integrated circuits
 Dried and pressurized, as a dielectric gas for high voltage equipment
 The manufacturing of stainless steel[14]
 Used in military aircraft fuel systems to reduce fire hazard (see inerting system)
 On top of liquid explosives as a safety measure
 Filling automotive and aircraft tires[15] due to its inertness and lack of moisture or oxidative
qualities, as opposed to air, though this is not necessary for consumer automobiles.[16][17]
 Used as a propellant for draught wine, and as an alternative to or together with carbon dioxide for
other beverages.

Nitrogen is commonly used during sample preparation procedures for chemical analysis. Specifically, it is
used to concentrate and reduce the volume of liquid samples. Directing a pressurized stream of nitrogen
gas perpendicular to the surface of the liquid allows the solvent to evaporate while leaving the solute(s)
and un-evaporated solvent behind.[18]

Nitrogen tanks are also replacing carbon dioxide as the main power source for paintball guns. The
downside is that nitrogen must be kept at higher pressure than CO2, making N2 tanks heavier and more
expensive.

Nitrogenated beer

A further example of its versatility is its use as a preferred alternative to carbon dioxide to pressurize kegs
of some beers, particularly stouts and British ales, due to the smaller bubbles it produces, which make the
dispensed beer smoother and headier. A modern application of a pressure-sensitive nitrogen capsule
known commonly as a "widget" now allows nitrogen-charged beers to be packaged in cans and bottles.[19]

A mixture of nitrogen and carbon dioxide can be used for this purpose as well, to maintain the saturation
of beer with carbon dioxide.[20]

Liquid nitrogen

An air balloon submerged in liquid nitrogen

Main article: Liquid nitrogen

Liquid nitrogen is a cryogenic liquid. At atmospheric pressure, it boils at −195.8 °C. When insulated in
proper containers such as Dewar flasks, it can be transported without much evaporative loss.[21]

Like dry ice, the main use of liquid nitrogen is as a refrigerant. Among other things, it is used in the
cryopreservation of blood, reproductive cells (sperm and egg), and other biological samples and materials.
It is used medically in cryotherapy to remove cysts and warts on the skin.[22] It is used in cold traps for
certain laboratory equipment and to cool X-ray detectors. It has also been used to cool central processing
units and other devices in computers which are overclocked, and which produce more heat than during
normal operation.[23]

Applications of nitrogen compounds

Molecular nitrogen (N2) in the atmosphere is relatively non-reactive due to its strong bond, and N2 plays
an inert role in the human body, being neither produced nor destroyed. In nature, nitrogen is converted
into biologically (and industrially) useful compounds by lightning and by some living organisms, notably
certain bacteria (i.e., nitrogen-fixing bacteria – see Biological role below). Molecular nitrogen is released
into the atmosphere during the decay of dead plant and animal tissues.

The ability to combine or fix molecular nitrogen is a key feature of modern industrial chemistry, where
nitrogen and natural gas are converted into ammonia via the Haber process. Ammonia, in turn, can be
used directly (primarily as a fertilizer, and in the synthesis of nitrated fertilizers), or as a precursor of
many other important materials including explosives, largely via the production of nitric acid by the
Ostwald process.

The organic and inorganic salts of nitric acid have been important historically as convenient stores of
chemical energy. They include important compounds such as potassium nitrate (or saltpeter used in
gunpowder) and ammonium nitrate, an important fertilizer and explosive (see ANFO). Various other
nitrated organic compounds, such as nitroglycerin and trinitrotoluene, and nitrocellulose, are used as
explosives and propellants for modern firearms. Nitric acid is used as an oxidizing agent in liquid fueled
rockets. Hydrazine and hydrazine derivatives find use as rocket fuels and monopropellants. In most of
these compounds, the basic instability and tendency to burn or explode is derived from the fact that
nitrogen is present as an oxide, and not as the far more stable nitrogen molecule (N2) which is a product
of the compounds' thermal decomposition. When nitrates burn or explode, the formation of the powerful
triple bond in the N2 produces most of the energy of the reaction.

Nitrogen is a constituent of molecules in every major drug class in pharmacology and medicine. Nitrous
oxide (N2O) was discovered early in the 19th century to be a partial anesthetic, though it was not used as
a surgical anesthetic until later. Called "laughing gas", it was found capable of inducing a state of social
disinhibition resembling drunkenness. Other notable nitrogen-containing drugs are drugs derived from
plant alkaloids, such as morphine (there exist many alkaloids known to have pharmacological effects; in
some cases they appear to be natural chemical defenses of plants against predation). Drugs that contain
nitrogen include all major classes of antibiotics, and organic nitrate drugs like nitroglycerin and
nitroprusside that regulate blood pressure and heart action by mimicking the action of nitric oxide.

Biological role

See also: Nitrogen cycle and Human impacts on the nitrogen cycle

Nitrogen is an essential building block of amino and nucleic acids, essential to life on Earth.

Elemental nitrogen in the atmosphere cannot be used directly by either plants or animals, and must be
converted to a reduced (or 'fixed') state in order to be useful for higher plants and animals. Precipitation
often contains substantial quantities of ammonium and nitrate, thought to result from nitrogen fixation by
lightning and other atmospheric electric phenomena.[24] This was first proposed by Liebig in 1827 and
later confirmed.[24] However, because ammonium is preferentially retained by the forest canopy relative to
atmospheric nitrate, most fixed nitrogen reaches the soil surface under trees as nitrate. Soil nitrate is
preferentially assimilated by tree roots relative to soil ammonium.

Specific bacteria (e.g. Rhizobium trifolium) possess nitrogenase enzymes which can fix atmospheric
nitrogen (see nitrogen fixation) into a form (ammonium ion) that is chemically useful to higher
organisms. This process requires a large amount of energy and anoxic conditions. Such bacteria may live
freely in soil (e.g. Azotobacter) but normally exist in a symbiotic relationship in the root nodules of
leguminous plants (e.g. clover, Trifolium, or soybean plant, Glycine max). Nitrogen-fixing bacteria are
also symbiotic with a number of unrelated plant species such as alders (Alnus) spp., lichens, Casuarina,
Myrica, liverworts, and Gunnera.[25]

As part of the symbiotic relationship, the plant converts the 'fixed' ammonium ion to nitrogen oxides and
amino acids to form proteins and other molecules (e.g., alkaloids). In return for the 'fixed' nitrogen, the
plant secretes sugars to the symbiotic bacteria.[25] Legumes maintain an anaerobic (oxygen free)
environment for their nitrogen-fixing bacteria.

Plants are able to assimilate nitrogen directly in the form of nitrates which may be present in soil from
natural mineral deposits, artificial fertilizers, animal waste, or organic decay (as the product of bacteria,
but not bacteria specifically associated with the plant). Nitrates absorbed in this fashion are converted to
nitrites by the enzyme nitrate reductase, and then converted to ammonia by another enzyme called nitrite
reductase.[25]

Nitrogen compounds are basic building blocks in animal biology as well. Animals use nitrogen-
containing amino acids from plant sources, as starting materials for all nitrogen-compound animal
biochemistry, including the manufacture of proteins and nucleic acids. Plant-feeding insects are
dependent on nitrogen in their diet, such that varying the amount of nitrogen fertilizer applied to a plant
can affect the reproduction rate of insects feeding on fertilized plants.[26]

Soluble nitrate is an important limiting factor in the growth of certain bacteria in ocean waters. [27] In many
places in the world, artificial fertilizers applied to crop-lands to increase yields result in run-off delivery
of soluble nitrogen to oceans at river mouths. This process can result in eutrophication of the water, as
nitrogen-driven bacterial growth depletes water oxygen to the point that all higher organisms die. Well-
known "dead zone" areas in the U.S. Gulf Coast and the Black Sea are due to this important polluting
process.

Many saltwater fish manufacture large amounts of trimethylamine oxide to protect them from the high
osmotic effects of their environment (conversion of this compound to dimethylamine is responsible for
the early odor in unfresh saltwater fish).[28] In animals, the free radical nitric oxide (NO), derived from an
amino acid, serves as an important regulatory molecule for circulation.[27]

Animal metabolism of NO results in production of nitrite. Animal metabolism of nitrogen in proteins
generally results in excretion of urea, while animal metabolism of nucleic acids results in excretion of
urea and uric acid. The characteristic odor of animal flesh decay is caused by the creation of long-chain,
nitrogen-containing amines, such as putrescine and cadaverine which are (respectively) breakdown
products of the amino acids ornithine and lysine in decaying proteins.[29]

Decay of organisms and their waste products may produce small amounts of nitrate, but most decay
eventually returns nitrogen content to the atmosphere, as molecular nitrogen. The circulation of nitrogen
from atmosphere, to organic compounds, then back to the atmosphere, is referred to as the nitrogen
cycle.[25]

Safety

Rapid release of nitrogen gas into an enclosed space can displace oxygen, and therefore represents an
asphyxiation hazard. This may happen with few warning symptoms, since the human carotid body is a
relatively slow and poor low-oxygen (hypoxia) sensing system.[30] An example occurred shortly before
the launch of the first Space Shuttle mission in 1981, when two technicians lost consciousness (and one of
them died) after they walked into a space located in the Shuttle's Mobile Launcher Platform that was
pressurized with pure nitrogen as a precaution against fire. The technicians would have been able to exit
the room if they had experienced early symptoms from nitrogen-breathing.

When inhaled at high partial pressures (more than about 4 bar, encountered at depths below about 30 m in
scuba diving) nitrogen begins to act as an anesthetic agent. It can cause nitrogen narcosis, a temporary
semi-anesthetized state of mental impairment similar to that caused by nitrous oxide.[31][32]
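
The quoted depth and pressure figures can be related with the usual diver's rule of thumb that each 10 m
of seawater adds about 1 bar to the 1 bar surface pressure; a rough sketch in Python (the 1 bar per 10 m
gradient is an assumption, while the ~78% N2 fraction comes from earlier in the text):

    # Ambient and N2 partial pressures versus depth in seawater.
    def pressures_bar(depth_m, n2_fraction=0.78):
        ambient = 1.0 + depth_m / 10.0   # ~1 bar per 10 m plus surface pressure
        return ambient, n2_fraction * ambient

    for depth in (0, 10, 30, 40):
        ambient, p_n2 = pressures_bar(depth)
        print(f"{depth} m: ambient {ambient:.1f} bar, N2 partial {p_n2:.1f} bar")
    # Around 30-40 m the ambient pressure reaches roughly 4-5 bar, the regime
    # the text associates with the onset of nitrogen narcosis.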

Nitrogen also dissolves in the bloodstream and body fats. Rapid decompression (particularly in the case of
divers ascending too quickly, or astronauts decompressing too quickly from cabin pressure to spacesuit
pressure) can lead to a potentially fatal condition called decompression sickness (formerly known as
caisson sickness or more commonly, the "bends"), when nitrogen bubbles form in the bloodstream,
nerves, joints, and other sensitive or vital areas.[33][34] Other "inert" gases (those gases other than carbon
dioxide and oxygen) cause the same effects from bubbles composed of them, so replacement of nitrogen
in breathing gases may prevent nitrogen narcosis, but does not prevent decompression sickness.[35]

Direct skin contact with liquid nitrogen will eventually cause severe frostbite (cryogenic "burns"). This
may happen almost instantly on contact, or after a second or more, depending on the form of liquid
nitrogen. Bulk liquid nitrogen causes less rapid freezing than a spray of nitrogen mist (such as is used to
freeze certain skin growths in the practice of dermatology). The extra surface area provided by nitrogen-
soaked materials is also important, with soaked clothing or cotton causing far more rapid damage than a
spill of direct liquid to skin. Full "contact" between naked skin and large collected droplets or pools of
liquid nitrogen may be prevented for a second or two by a layer of insulating gas arising from the
Leidenfrost effect, which may give the skin a moment of protection from the bulk liquid. However, liquid
nitrogen applied to skin as a mist or on fabrics bypasses this effect and causes local frostbite immediately.

Water


This article is about general aspects of water. For a detailed discussion of its properties, see Properties
of water. For other uses, see Water (disambiguation).

Water in three states: liquid, solid (ice), and (invisible) water vapor in the air. Clouds are accumulations
of water droplets, condensed from vapor-saturated air.

Water is a chemical substance with the chemical formula H2O. Its molecule contains one oxygen and two
hydrogen atoms connected by covalent bonds. Water is a liquid at ambient conditions, but it often co-
exists on Earth with its solid state, ice, and gaseous state, water vapor or steam.

Water covers 70.9% of the Earth's surface,[1] and is vital for all known forms of life.[2] On Earth, it is
found mostly in oceans and other large water bodies, with 1.6% of water below ground in aquifers and
0.001% in the air as vapor, clouds (formed of solid and liquid water particles suspended in air), and
precipitation.[3] Oceans hold 97% of surface water, glaciers and polar ice caps 2.4%, and other land
surface water such as rivers, lakes and ponds 0.6%. A very small amount of the Earth's water is contained
within biological bodies and manufactured products.

Water on Earth moves continually through a cycle of evaporation or transpiration (evapotranspiration),
precipitation, and runoff, usually reaching the sea. Over land, evaporation and transpiration contribute to
the precipitation over land.

Clean drinking water is essential to humans and other lifeforms. Access to safe drinking water has
improved steadily and substantially over the last decades in almost every part of the world. [4][5] There is a
clear correlation between access to safe water and GDP per capita.[6] However, some observers have
estimated that by 2025 more than half of the world population will be facing water-based vulnerability.[7]
A recent report (November 2009) suggests that by 2030, in some developing regions of the world, water
demand will exceed supply by 50%.[8] Water plays an important role in the world economy, as it functions
as a solvent for a wide variety of chemical substances and facilitates industrial cooling and transportation.
Approximately 70% of freshwater is consumed by agriculture.[9]

Contents


 1 Chemical and physical properties
 2 Taste and odor
 3 Distribution of water in nature
o 3.1 Water in the universe
o 3.2 Water and habitable zone
 4 Water on Earth
o 4.1 Water cycle
o 4.2 Fresh water storage
o 4.3 Sea water
o 4.4 Tides
 5 Effects on life
o 5.1 Aquatic life forms
 6 Effects on human civilization
o 6.1 Health and pollution
o 6.2 Human uses
 6.2.1 Agriculture
 6.2.2 Water as a scientific standard
 6.2.3 For drinking
 6.2.4 Washing
 6.2.5 Chemical uses
 6.2.6 Heat exchange
 6.2.7 Fire extinction
 6.2.8 Recreation
 6.2.9 Water industry
 6.2.10 Industrial applications
 6.2.11 Food processing
 7 Water law, water politics and water crisis
 8 Water in culture
o 8.1 Religion
o 8.2 Philosophy
o 8.3 Literature
 9 See also
o 9.1 Other topics
 10 References
 11 Further reading
o 11.1 Water as a natural resource
 12 External links

Chemical and physical properties

Main articles: Water (properties), Water (data page), and Water model

Model of hydrogen bonds between molecules of water

Impact from a water drop causes an upward "rebound" jet surrounded by circular capillary waves.

Snowflakes by Wilson Bentley, 1902

Dew drops adhering to a spider web

Capillary action of water compared to mercury

Water is the chemical substance with chemical formula H2O: one molecule of water has two hydrogen
atoms covalently bonded to a single oxygen atom.

Water appears in nature in all three common states of matter and may take many different forms on Earth:
water vapor and clouds in the sky; seawater and icebergs in the polar oceans; glaciers and rivers in the
mountains; and the liquid in aquifers in the ground.

At high temperatures and pressures, such as in the interior of giant planets, it is argued that water exists as
ionic water in which the molecules break down into a soup of hydrogen and oxygen ions, and at even
higher pressures as superionic water in which the oxygen crystallises but the hydrogen ions float around
freely within the oxygen lattice.[10]

The major chemical and physical properties of water are:

 Water is a liquid at standard temperature and pressure. It is tasteless and odorless. The intrinsic
color of water and ice is a very slight blue hue, although both appear colorless in small quantities.
Water vapor is essentially invisible as a gas.[11]

 Water is transparent in the visible electromagnetic spectrum. Thus aquatic plants can live in water
because sunlight can reach them. Ultraviolet and infrared light are strongly absorbed.

 Since the water molecule is not linear and the oxygen atom has a higher electronegativity than the
hydrogen atoms, the oxygen carries a slight negative charge, whereas the hydrogen atoms are slightly
positive. As a result, water is a polar molecule with an electrical dipole moment. Water also can
form an unusually large number of intermolecular hydrogen bonds (four) for a molecule of its
size. These factors lead to strong attractive forces between molecules of water, giving rise to
water's high surface tension[12] and capillary forces. The capillary action refers to the tendency of
water to move up a narrow tube against the force of gravity. This property is relied upon by all
vascular plants, such as trees.

 Water is a good solvent and is often referred to as the universal solvent. Substances that dissolve
in water, e.g., salts, sugars, acids, alkalis, and some gases – especially oxygen and carbon dioxide
(carbonation) – are known as hydrophilic (water-loving) substances, while those that do not mix
well with water (e.g., fats and oils) are known as hydrophobic (water-fearing) substances.

 All the major components in cells (proteins, DNA and polysaccharides) are also dissolved in
water.

 Pure water has a low electrical conductivity, but this increases significantly with the dissolution
of a small amount of ionic material such as sodium chloride.

 The boiling point of water (and all other liquids) is dependent on the barometric pressure. For
example, on the top of Mt. Everest water boils at 68 °C (154 °F), compared to 100 °C (212 °F) at
sea level. Conversely, water deep in the ocean near geothermal vents can reach temperatures of
hundreds of degrees and remain liquid.

 Water has the second highest molar specific heat capacity of any known substance, after
ammonia, as well as a high heat of vaporization (40.65 kJ·mol−1), both of which are a result of the
extensive hydrogen bonding between its molecules. These two unusual properties allow water to
moderate Earth's climate by buffering large fluctuations in temperature.

 The maximum density of water occurs at 3.98 °C (39.16 °F).[13] It has the anomalous property of
becoming less dense, not more, when it is cooled down to its solid form, ice. It expands to occupy
a 9% greater volume in this solid state, which accounts for ice floating on liquid water (see the
sketch after this list).

ADR label for transporting goods dangerously reactive with water

 Water is miscible with many liquids, such as ethanol, in all proportions, forming a single
homogeneous liquid. On the other hand, water and most oils are immiscible usually forming
layers according to increasing density from the top. As a gas, water vapor is completely miscible
with air.

 Water forms an azeotrope with many other solvents.

 Water can be split by electrolysis into hydrogen and oxygen.

 As an oxide of hydrogen, water is formed when hydrogen or hydrogen-containing compounds
burn or react with oxygen or oxygen-containing compounds. Water is not a fuel; it is an end-
product of the combustion of hydrogen. The energy required to split water into hydrogen and
oxygen by electrolysis or any other means is greater than the energy that can be collected when
the hydrogen and oxygen recombine.[14]

 Elements which are more electropositive than hydrogen such as lithium, sodium, calcium,
potassium and caesium displace hydrogen from water, forming hydroxides. Being a flammable
gas, the hydrogen given off is dangerous and the reaction of water with the more electropositive
of these elements may be violently explosive.
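
The ~9% expansion figure in the density bullet above follows directly from the densities of liquid water
and ice; a minimal sketch in Python, assuming handbook densities at 0 °C (values not given in the text):

    # Volume increase on freezing, from the densities of water and ice at 0 °C.
    rho_water = 0.9998   # g/cm^3, liquid water at 0 °C (assumed handbook value)
    rho_ice = 0.9167     # g/cm^3, ice at 0 °C (assumed handbook value)

    expansion = rho_water / rho_ice - 1
    print(f"{expansion:.1%}")  # ~9.1%, matching the ~9% expansion quoted above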

Taste and odor

Water can dissolve many different substances, giving it varying tastes and odors. Humans and other
animals have developed senses which enable them to evaluate the potability of water by avoiding water
that is too salty or putrid. The taste of spring water and mineral water, often advertised in marketing of
consumer products, derives from the minerals dissolved in it. However, pure H2O is tasteless and
odorless. The advertised purity of spring and mineral water refers to absence of toxins, pollutants and
microbes.

Distribution of water in nature

Water in the universe

Much of the universe's water is produced as a byproduct of star formation. When stars are born, their birth
is accompanied by a strong outward wind of gas and dust. When this outflow of material eventually
impacts the surrounding gas, the shock waves that are created compress and heat the gas. The water
observed is quickly produced in this warm dense gas.[15]

Water has been detected in interstellar clouds within our galaxy, the Milky Way. Water probably exists in
abundance in other galaxies, too, because its components, hydrogen and oxygen, are among the most
abundant elements in the universe. Interstellar clouds eventually condense into solar nebulae and solar
systems such as ours.

Water vapor is present in

 Atmosphere of Mercury: 3.4%, and large amounts of water in Mercury's exosphere[16]
 Atmosphere of Venus: 0.002%
 Earth's atmosphere: ~0.40% over full atmosphere, typically 1–4% at surface
 Atmosphere of Mars: 0.03%
 Atmosphere of Jupiter: 0.0004%
 Atmosphere of Saturn – in ices only
 Enceladus (moon of Saturn): 91%
 exoplanets known as HD 189733 b[17] and HD 209458 b.[18]

Liquid water is present on

 Earth – 71% of surface

Strong evidence suggests that liquid water is present just under the surface of Saturn's moon Enceladus.
Jupiter's moon Europa may have liquid water in the form of a 100 km deep subsurface ocean, which
would amount to more water than is in all the Earth's oceans.

Water ice is present on

 Earth – mainly as ice sheets
 polar ice caps on Mars
 Moon
 Titan
 Europa
 Saturn's rings[19]
 Enceladus
 Pluto and Charon[19]
 Comets and comet source populations (Kuiper belt and Oort cloud objects).

Water ice may be present on Ceres and Tethys. Water and other volatiles probably comprise much of the
internal structures of Uranus and Neptune and the water in the deeper layers may be in the form of ionic
water in which the molecules break down into a soup of hydrogen and oxygen ions, and deeper down as
superionic water in which the oxygen crystallises but the hydrogen ions float around freely within the
oxygen lattice.[10]

Some of the Moon's minerals contain water molecules. For instance, in 2008 a laboratory device which
ejects and identifies particles found small amounts of the compound inside volcanic pearls brought from
the Moon to Earth by the Apollo 15 crew in 1971.[20] In September 2009, NASA reported the detection of
water molecules by its Moon Mineralogy Mapper aboard the Indian Space Research Organization's
Chandrayaan-1 spacecraft.[21]

Water and habitable zone

The existence of liquid water, and to a lesser extent its gaseous and solid forms, on Earth are vital to the
existence of life on Earth as we know it. The Earth is located in the habitable zone of the solar system; if
it were slightly closer to or farther from the Sun (about 5%, or about 8 million kilometers), the conditions
which allow the three forms to be present simultaneously would be far less likely to exist.[22][23]

Earth's gravity allows it to hold an atmosphere. Water vapor and carbon dioxide in the atmosphere
provide a temperature buffer (greenhouse effect) which helps maintain a relatively steady surface
temperature. If Earth were smaller, a thinner atmosphere would allow temperature extremes, thus
preventing the accumulation of water except in polar ice caps (as on Mars).

The surface temperature of Earth has been relatively constant through geologic time despite varying
levels of incoming solar radiation (insolation), indicating that a dynamic process governs Earth's
temperature via a combination of greenhouse gases and surface or atmospheric albedo. This proposal is
known as the Gaia hypothesis.

The state of water on a planet depends on ambient pressure, which is determined by the planet's gravity. If
a planet is sufficiently massive, the water on it may be solid even at high temperatures, because of the
high pressure caused by gravity, as has been observed on the exoplanets Gliese 436 b[24] and GJ 1214 b.[25]

There are various theories about the origin of water on Earth.

Water on Earth

Main articles: Hydrology and Water distribution on Earth

A graphical distribution of the locations of water on Earth.

Water covers 71% of the Earth's surface; the oceans contain 97.2% of the Earth's water. The Antarctic ice
sheet, which contains 61% of all fresh water on Earth, is visible at the bottom. Condensed atmospheric
water can be seen as clouds, contributing to the Earth's albedo.

Hydrology is the study of the movement, distribution, and quality of water throughout the Earth. The
study of the distribution of water is hydrography. The study of the distribution and movement of
groundwater is hydrogeology, of glaciers is glaciology, of inland waters is limnology, and of the
distribution of oceans is oceanography. Ecological processes involving hydrology are the focus of
ecohydrology.

The collective mass of water found on, under, and over the surface of a planet is called the hydrosphere.
Earth's approximate water volume (the total water supply of the world) is 1,360,000,000 km3
(326,000,000 mi3).

Groundwater and fresh water are useful or potentially useful to humans as water resources.

Liquid water is found in bodies of water, such as an ocean, sea, lake, river, stream, canal, pond, or puddle.
The majority of water on Earth is sea water. Water is also present in the atmosphere in solid, liquid, and
vapor states. It also exists as groundwater in aquifers.

Water is important in many geological processes. Groundwater is present in most rocks, and the pressure
of this groundwater affects patterns of faulting. Water in the mantle is responsible for the melt that
produces volcanoes at subduction zones. On the surface of the Earth, water is important in both chemical
and physical weathering processes. Water and, to a lesser but still significant extent, ice, are also
responsible for a large amount of sediment transport that occurs on the surface of the earth. Deposition of
transported sediment forms many types of sedimentary rocks, which make up the geologic record of Earth
history.

Water cycle

Main article: Water cycle

Water cycle

The water cycle (known scientifically as the hydrologic cycle) refers to the continuous exchange of water
within the hydrosphere, between the atmosphere, soil water, surface water, groundwater, and plants.

Water moves perpetually through each of these regions in the water cycle, which consists of the following
transfer processes:

 evaporation from oceans and other water bodies into the air and transpiration from land plants
and animals into air.
 precipitation, from water vapor condensing from the air and falling to earth or ocean.
 runoff from the land usually reaching the sea.

Most water vapor over the oceans returns to the oceans, but winds carry water vapor over land at the same
rate as runoff into the sea, about 36 Tt per year. Over land, evaporation and transpiration contribute
another 71 Tt per year. Precipitation, at a rate of 107 Tt per year over land, has several forms: most
commonly rain, snow, and hail, with some contribution from fog and dew. (These figures balance: the
36 Tt of ocean-derived vapor plus the 71 Tt from land evapotranspiration equal the 107 Tt that falls as
precipitation over land.) Condensed water in the air may also refract sunlight to produce rainbows.

Water runoff often collects over watersheds flowing into rivers. A mathematical model used to simulate
river or stream flow and calculate water quality parameters is a hydrological transport model. Some of
this water is diverted to irrigation for agriculture. Rivers and seas offer opportunities for travel and
commerce. Through erosion, runoff shapes the environment, creating river valleys and deltas which
provide rich soil and level ground for the establishment of population centers. A flood occurs when an
area of land, usually low-lying, is covered with water; this happens when a river overflows its banks or
when flood waters arrive from the sea. A drought is an extended period of months or years when a region
notes a deficiency in its water supply. This occurs when a region receives consistently below-average
precipitation.

Fresh water storage

High tide (left) and low tide (right)

Main article: Water resources

Some runoff water is trapped for periods of time, for example in lakes. At high altitude, during winter,
and in the far north and south, snow collects in ice caps, snow pack and glaciers. Water also infiltrates the
ground and goes into aquifers. This groundwater later flows back to the surface in springs, or more
spectacularly in hot springs and geysers. Groundwater is also extracted artificially in wells. This water
storage is important, since clean, fresh water is essential to human and other land-based life. In many
parts of the world, it is in short supply.

Sea water

Sea water contains about 3.5% salt on average, plus smaller amounts of other substances. The physical
properties of sea water differ from fresh water in some important respects. It freezes at a lower
temperature (about −1.9 °C) and its density increases with decreasing temperature to the freezing point,
instead of reaching maximum density at a temperature above freezing. The salinity of water in major seas
varies from about 0.7% in the Baltic Sea to 4.0% in the Red Sea.

Tides

Tides are the cyclic rising and falling of local sea levels caused by the tidal forces of the Moon and the
Sun acting on the oceans. Tides cause changes in the depth of the marine and estuarine water bodies and
produce oscillating currents known as tidal streams. The changing tide produced at a given location is the
result of the changing positions of the Moon and Sun relative to the Earth coupled with the effects of
Earth rotation and the local bathymetry. The strip of seashore that is submerged at high tide and exposed
at low tide, the intertidal zone, is an important ecological product of ocean tides.

Effects on life

An oasis is an isolated water source with vegetation in a desert

Overview of photosynthesis and respiration. Water (at right), together with carbon dioxide (CO2), forms
oxygen and organic compounds (at left), which can be respired to water and CO2.

From a biological standpoint, water has many distinct properties, critical for the proliferation of life, that
set it apart from other substances. It carries out this role by allowing organic compounds to react
in ways that ultimately allow replication. All known forms of life depend on water. Water is vital both as
a solvent in which many of the body's solutes dissolve and as an essential part of many metabolic
processes within the body. Metabolism is the sum total of anabolism and catabolism. In anabolism, water
is removed from molecules (through energy requiring enzymatic chemical reactions) in order to grow
larger molecules (e.g. starches, triglycerides and proteins for storage of fuels and information). In
catabolism, water is used to break bonds in order to generate smaller molecules (e.g. glucose, fatty acids
and amino acids to be used for fuels for energy use or other purposes). Without water, these particular
metabolic processes could not exist.

Water is fundamental to photosynthesis and respiration. Photosynthetic cells use the sun's energy to split
off water's hydrogen from oxygen. Hydrogen is combined with CO2 (absorbed from air or water) to form
glucose and release oxygen. All living cells use such fuels and oxidize the hydrogen and carbon to capture
the sun's energy and reform water and CO2 in the process (cellular respiration).

Water is also central to acid-base neutrality and enzyme function. An acid, a hydrogen ion (H+, that is, a
proton) donor, can be neutralized by a base, a proton acceptor such as hydroxide ion (OH−) to form water.
Water is considered to be neutral, with a pH (the negative log of the hydrogen ion concentration) of 7.
Acids have pH values less than 7 while bases have values greater than 7.
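
Because pH is defined as the negative base-10 logarithm of the hydrogen ion concentration, these
boundaries can be checked directly; a minimal sketch in Python:

    import math

    # pH = -log10([H+]), with [H+] in mol/L, per the definition above.
    def pH(h_conc_molar):
        return -math.log10(h_conc_molar)

    print(pH(1e-7))   # 7.0 -> neutral water
    print(pH(1e-3))   # 3.0 -> acidic (pH < 7)
    print(pH(1e-11))  # 11.0 -> basic (pH > 7)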

Some of the biodiversity of a coral reef

Aquatic life forms

Main articles: Hydrobiology and Aquatic plant

Some marine diatoms – a key phytoplankton group

Earth's surface waters are filled with life. The earliest life forms appeared in water; nearly all fish live
exclusively in water, and there are many types of marine mammals, such as dolphins and whales. Some
kinds of animals, such as amphibians, spend portions of their lives in water and portions on land. Plants
such as kelp and algae grow in the water and are the basis for some underwater ecosystems. Plankton is
generally the foundation of the ocean food chain.

Aquatic vertebrates must obtain oxygen to survive, and they do so in various ways. Fish have gills instead
of lungs, although some species of fish, such as the lungfish, have both. Marine mammals, such as
dolphins, whales, otters, and seals need to surface periodically to breathe air. Some amphibians are able to
absorb oxygen through their skin. Invertebrates exhibit a wide range of modifications to survive in poorly
oxygenated waters including breathing tubes (see insect and mollusc siphons) and gills (Carcinus).
However, as invertebrate life evolved in an aquatic habitat, most have little or no specialisation for
respiration in water.

Effects on human civilization

Water fountain

Civilization has historically flourished around rivers and major waterways; Mesopotamia, the so-called
cradle of civilization, was situated between the major rivers Tigris and Euphrates; the ancient society of
the Egyptians depended entirely upon the Nile. Large metropolises like Rotterdam, London, Montreal,
Paris, New York City, Buenos Aires, Shanghai, Tokyo, Chicago, and Hong Kong owe their success in
part to their easy accessibility via water and the resultant expansion of trade. Islands with safe water ports,
like Singapore, have flourished for the same reason. In places such as North Africa and the Middle East,
where water is more scarce, access to clean drinking water was and is a major factor in human
development.

Health and pollution

Environmental Science Program, Iowa State University student sampling water.

Water fit for human consumption is called drinking water or potable water. Water that is not potable may
be made potable by filtration or distillation, or by a range of other methods.

Water that is not fit for drinking but is not harmful for humans when used for swimming or bathing is
called by various names other than potable or drinking water, and is sometimes called safe water, or "safe
for bathing". Chlorine is a skin and mucous membrane irritant that is used to make water safe for bathing
or drinking. Its use is highly technical and is usually monitored by government regulations (typically 1
part per million (ppm) for drinking water, and 1–2 ppm of chlorine not yet reacted with impurities for
bathing water). Water for bathing may be maintained in satisfactory microbiological condition using
chemical disinfectants such as chlorine or ozone or by the use of ultraviolet light.

In the USA, non-potable forms of wastewater generated by humans may be referred to as greywater,
which is treatable and thus easily able to be made potable again, and blackwater, which generally contains
sewage and other forms of waste which require further treatment in order to be made reusable. Greywater
composes 50–80% of residential wastewater generated by a household's sanitation equipment (sinks,
showers and kitchen runoff, but not toilets, which generate blackwater). These terms may have different
meanings in other countries and cultures.

This natural resource is becoming scarcer in certain places, and its availability is a major social and
economic concern. Currently, about a billion people around the world routinely drink unhealthy water.
During the 2003 G8 Evian summit, most countries accepted the goal of halving by 2015 the number of
people worldwide who do not have access to safe water and sanitation.[26] Even if this difficult goal is met,
it will still leave more than an estimated half a billion people without access to safe drinking water and
over a billion without access to adequate sanitation. Poor water quality and bad sanitation are deadly;
some five million deaths a year are caused by polluted drinking water. The World Health Organization
estimates that safe water could prevent 1.4 million child deaths from diarrhea each year.[27] Water,
however, is not a finite resource: it is re-circulated as potable water in precipitation in quantities many
orders of magnitude higher than human consumption. What is non-renewable is the relatively small
quantity of water held in reserve in the earth (about 1% of our drinking water supply, replenished in
aquifers around every 1 to 10 years); the scarcity lies in the distribution of potable and irrigation water
rather than in the total amount that exists on the earth. Water-poor countries use importation of goods as
the primary method of importing water (to leave enough for local human consumption), since the
manufacturing process uses around 10 to 100 times a product's mass in water.

In the developing world, 90% of all wastewater still goes untreated into local rivers and streams.[28] Some
50 countries, with roughly a third of the world’s population, also suffer from medium or high water stress,
and 17 of these extract more water annually than is recharged through their natural water cycles.[29] The
strain not only affects surface freshwater bodies like rivers and lakes, but it also degrades groundwater
resources.

Human uses

Further information: Water supply

Agriculture

Irrigation of field crops

The most important use of water in agriculture is for irrigation, which is a key component of producing
enough food. Irrigation takes up to 90% of the water withdrawn in some developing countries[30] and
significant proportions in more economically developed countries (in the United States, 30% of
freshwater usage is for irrigation).[31]

Water as a scientific standard

On 7 April 1795, the gram was defined in France to be equal to "the absolute weight of a volume of pure
water equal to a cube of one hundredth of a meter, and to the temperature of the melting ice."[32] Such a
cube holds 1 cm³, so the definition assigns one gram to one cubic centimeter of water. For practical
purposes though, a metallic reference standard was required, one thousand times more massive: the
kilogram, corresponding to one liter of water. Work was therefore commissioned to determine precisely
the mass of one liter of water.
spite of the fact that the decreed definition of the gram specified water at 0 °C—a highly reproducible
temperature—the scientists chose to redefine the standard and to perform their measurements at the
temperature of highest water density, which was measured at the time as 4 °C (39 °F).[33]

The Kelvin temperature scale of the SI system is based on the triple point of water, defined as exactly
273.16 K or 0.01 °C. The scale is an absolute temperature scale with the same increment as the Celsius
temperature scale, which was originally defined according to the boiling point (set to 100 °C) and melting
point (set to 0 °C) of water.

Natural water consists mainly of the isotopes hydrogen-1 and oxygen-16, but there is also a small quantity
of heavier isotopes such as hydrogen-2 (deuterium). The amount of deuterium oxide, or heavy water, is
very small, but it still affects the properties of water. Water from rivers and lakes tends to contain less
deuterium than seawater. Therefore, standard water is defined in the Vienna Standard Mean Ocean Water
specification.

For drinking
Main article: Drinking water

A young girl drinking bottled water

Water quality: fraction of population using improved water sources by country

The human body contains anywhere from 55% to 78% water, depending on body size.[34] To function
properly, the body requires between one and seven liters of water per day to avoid dehydration; the
precise amount depends on the level of activity, temperature, humidity, and other factors. Most of this is
ingested through foods or beverages other than drinking straight water. It is not clear how much water
intake is needed by healthy people, though most advocates agree that 6–7 glasses of water (approximately
2 liters) daily is the minimum to maintain proper hydration.[35] Medical literature favors a lower
consumption, typically 1 liter of water for an average male, excluding extra requirements due to fluid loss
from exercise or warm weather.[36] For those who have healthy kidneys, it is rather difficult to drink too
much water, but (especially in warm humid weather and while exercising) it is dangerous to drink too
little. People can drink far more water than necessary while exercising, however, putting them at risk of
water intoxication (hyperhydration), which can be fatal. The popular claim that "a person should consume
eight glasses of water per day" seems to have no real basis in science. [37] Similar misconceptions
concerning the effect of water on weight loss and constipation have also been dispelled.[38]

Hazard symbol for non-potable water

An original recommendation for water intake in 1945 by the Food and Nutrition Board of the National
Research Council read: "An ordinary standard for diverse persons is 1 milliliter for each calorie of food.
Most of this quantity is contained in prepared foods."[39] The latest dietary reference intake report by the
United States National Research Council in general recommended (including food sources): 2.7 liters of
water total for women and 3.7 liters for men.[40] Specifically, pregnant and breastfeeding women need
additional fluids to stay hydrated. According to the Institute of Medicine, which recommends that, on
average, women consume 2.2 liters and men 3.0 liters, the figure rises to 2.4 liters (10 cups) for pregnant
women and 3 liters (12 cups) for breastfeeding women, since an especially large amount of fluid is lost
during nursing.[41] Also noted is that normally, about 20% of water intake comes from food, while
the rest comes from drinking water and beverages (caffeinated included). Water is excreted from the body
in multiple forms; through urine and faeces, through sweating, and by exhalation of water vapor in the
breath. With physical exertion and heat exposure, water loss will increase and daily fluid needs may
increase as well.

Humans require water that does not contain too many impurities. Common impurities include metal salts
and oxides (including copper, iron, calcium and lead)[42] and/or harmful bacteria, such as Vibrio. Some
solutes are acceptable and even desirable for taste enhancement and to provide needed electrolytes.[43]

The single largest (by volume) freshwater resource suitable for drinking is Lake Baikal in Siberia.[44]

Washing

The propensity of water to form solutions and emulsions is useful in various washing processes. Many
industrial processes rely on reactions using chemicals dissolved in water, suspension of solids in water
slurries or using water to dissolve and extract substances. Washing is also an important component of
several aspects of personal body hygiene.

Chemical uses

Water is widely used in chemical reactions as a solvent or reactant and less commonly as a solute or
catalyst. In inorganic reactions, water is a common solvent, dissolving many ionic compounds. In organic
reactions, it is not usually used as a reaction solvent, because it does not dissolve the reactants well and is
amphoteric (acidic and basic) and nucleophilic. Nevertheless, these properties are sometimes desirable.
Also, acceleration of Diels-Alder reactions by water has been observed. Supercritical water has recently
been a topic of research. Oxygen-saturated supercritical water combusts organic pollutants efficiently.

Heat exchange

Ice used for cooling.

Water and steam are used as heat transfer fluids in diverse heat exchange systems, due to their availability
and high heat capacity, both as a coolant and for heating. Cool water may even be naturally available
from a lake or the sea. Condensing steam is a particularly efficient heating fluid because of the large heat
of vaporization. A disadvantage is that water and steam are somewhat corrosive. In almost all electric
power stations, water is the coolant, which vaporizes and drives steam turbines to drive generators. In the
U.S., cooling power plants is the largest use of water.[31]

In the nuclear power industry, water can also be used as a neutron moderator. In most nuclear reactors,
water is both a coolant and a moderator. This provides something of a passive safety measure, as
removing the water from the reactor also slows the nuclear reaction down; however, other methods are
favored for stopping a reaction, and it is preferred to keep the nuclear core covered with water so as to
ensure adequate cooling.

Fire extinction

Water is used for fighting wildfires.

Water has a high heat of vaporization and is relatively inert, which makes it a good fire extinguishing
fluid. The evaporation of water carries heat away from the fire. However, only distilled water can be used
to fight fires involving electrical equipment, because impure water is electrically conductive. Water is not
suitable for use on fires involving oils and organic solvents, because they float on water and the explosive
boiling of water tends to spread the burning liquid.

Use of water in fire fighting should also take into account the hazards of a steam explosion, which may
occur when water is used on very hot fires in confined spaces, and of a hydrogen explosion, when
substances which react with water, such as certain metals or hot graphite, decompose the water, producing
hydrogen gas.

The power of such explosions was seen in the Chernobyl disaster, although the water involved did not
come from fire-fighting at that time but the reactor's own water cooling system. A steam explosion
occurred when the extreme over-heating of the core caused water to flash into steam. A hydrogen
explosion may have occurred as a result of reaction between steam and hot zirconium.

Recreation

Grand Anse Beach, St. George's, Grenada, West Indies, often reported as one of the top 10 beaches in the
world.

Main article: Water sport (recreation)

Humans use water for many recreational purposes, as well as for exercising and for sports. Some of these
include swimming, waterskiing, boating, surfing and diving. In addition, some sports, like ice hockey and
ice skating, are played on ice. Lakesides, beaches and waterparks are popular places for people to go to
relax and enjoy recreation. Many find the sound and appearance of flowing water to be calming, and
fountains and other water features are popular decorations. Some keep fish and other life in aquariums or
ponds for show, fun, and companionship. Humans also use water for snow sports, e.g. skiing, sledding,
snowmobiling or snowboarding, which require the water to be frozen. People may also use water for
play fighting such as with snowballs, water guns or water balloons.

Water industry

A water-carrier in India, 1882. In many places where running water was not available, water had to be
transported by people.

A manual water pump in China

Water purification facility

The water industry provides drinking water and wastewater services (including sewage treatment) to
households and industry. Water supply facilities include water wells, cisterns for rainwater harvesting,
water supply networks, water purification facilities, water tanks, water towers, and water pipes, including
old aqueducts. Atmospheric water generators are in development.

Drinking water is often collected at springs, extracted from artificial borings (wells) in the ground, or
pumped from lakes and rivers. Building more wells in adequate places is thus a possible way to produce
more water, assuming the aquifers can supply an adequate flow. Other water sources include rainwater
collection. Water may require purification for human consumption. This may involve removal of
undissolved substances, dissolved substances and harmful microbes. Popular methods are filtering with
sand, which only removes undissolved material; chlorination and boiling, which kill harmful microbes;
and distillation, which performs all three functions. More advanced techniques exist, such as reverse osmosis.
Desalination of abundant seawater is a more expensive solution used in coastal arid climates.

The distribution of drinking water is done through municipal water systems, tanker delivery or as bottled
water. Governments in many countries have programs to distribute water to the needy at no charge.
Others[who?] argue that the market mechanism and free enterprise are best to manage this rare resource and
to finance the boring of wells or the construction of dams and reservoirs.

Reducing usage by using drinking (potable) water only for human consumption is another option. In some
cities such as Hong Kong, sea water is extensively used for flushing toilets citywide in order to conserve
fresh water resources.

Polluting water may be the biggest single misuse of water; to the extent that a pollutant limits other uses
of the water, it becomes a waste of the resource, regardless of benefits to the polluter. Like other types of
pollution, this does not enter standard accounting of market costs, being conceived as externalities for
which the market cannot account. Thus other people pay the price of water pollution, while the private
firms' profits are not redistributed to the local population affected by this pollution. Pharmaceuticals
consumed by humans often end up in the waterways and can have detrimental effects on aquatic life if
they bioaccumulate and if they are not biodegradable.

Wastewater facilities include storm sewers and wastewater treatment plants. Another way to remove
pollution from surface runoff water is a bioswale.

Industrial applications

Water is used in power generation. Hydroelectricity is electricity obtained from hydropower.


Hydroelectric power comes from water driving a water turbine connected to a generator. Hydroelectricity
is a low-cost, non-polluting, renewable energy source. The energy is ultimately supplied by the sun: heat
from the sun evaporates water, which condenses as rain at higher altitudes and flows downhill.

Three Gorges Dam is the largest hydro-electric power station.

Pressurized water is used in water blasting and water jet cutters; very-high-pressure water jets are also
used for precise cutting. The method works well, is relatively safe, and is not harmful to the environment.
Water is also used to cool machinery, for example to prevent saw blades from over-heating.

Water is also used in many industrial processes and machines, such as the steam turbine and heat
exchanger, in addition to its use as a chemical solvent. Discharge of untreated water from industrial uses
is pollution. Pollution includes discharged solutes (chemical pollution) and discharged coolant water
(thermal pollution). Industry requires pure water for many applications and utilizes a variety of
purification techniques both in water supply and discharge.

Food processing

Water can be used to cook foods such as noodles.

Water plays many critical roles within the field of food science. It is important for a food scientist to
understand the roles that water plays within food processing to ensure the success of their products.

Solutes such as salts and sugars found in water affect the physical properties of water. The boiling and
freezing points of water are affected by solutes, as well as air pressure, which is in turn affected by
altitude. Water boils at lower temperatures with the lower air pressure which occurs at higher elevations.
One mole of sucrose (sugar) per kilogram of water raises the boiling point of water by 0.51 °C, and one
mole of salt per kg raises the boiling point by 1.02 °C; similarly, increasing the number of dissolved
particles lowers water's freezing point.[45] Solutes in water also affect water activity, which affects many
chemical reactions and the growth of microbes in food.[46] Water activity can be described as the ratio of
the vapor pressure of water in a solution to the vapor pressure of pure water.[45] Solutes in water lower
water activity. This is important because most bacterial growth ceases at low levels of water activity.[46]
Not only does microbial growth affect the safety of food but also the preservation and shelf life of food.
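
The boiling-point figures above can be checked with a small sketch. The relation assumed here is the standard ebullioscopic formula ΔTb = Kb·i·m, with Kb = 0.51 °C·kg/mol for water and the van 't Hoff factor i counting dissolved particles (2 for NaCl, which dissociates); the 1 mol/kg molalities are just the examples from the text.

    KB_WATER = 0.51  # ebullioscopic constant of water, degC*kg/mol

    def boiling_point(molality, particles=1, pure_bp=100.0):
        """Boiling point of an aqueous solution via delta-Tb = Kb * i * m."""
        return pure_bp + KB_WATER * particles * molality

    print(boiling_point(1.0))     # 1 mol sucrose/kg, i=1 -> 100.51 degC
    print(boiling_point(1.0, 2))  # 1 mol NaCl/kg, i=2 -> 101.02 degC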

Water hardness is also a critical factor in food processing. It can dramatically affect the quality of a
product as well as play a role in sanitation. Water hardness is classified based on the amount of
removable calcium carbonate salt it contains per gallon. Water hardness is measured in grains; 0.064 g of
calcium carbonate is equivalent to one grain of hardness.[45] Water is classified as soft if it contains 1 to 4
grains, medium if it contains 5 to 10 grains and hard if it contains 11 to 20 grains.[vague][45] The hardness of
water may be altered or treated by using a chemical ion-exchange system. The hardness of water also
affects its pH balance, which plays a critical role in food processing. For example, hard water prevents the
successful production of clear beverages. Water hardness also affects sanitation; with increasing hardness,
water loses effectiveness as a sanitizer.[45]
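
The grain-based classification above translates directly into a small helper. The 0.064 g-per-grain equivalence and the soft/medium/hard thresholds are taken from the text; the function and the sample value are merely illustrative.

    GRAMS_CACO3_PER_GRAIN = 0.064  # 0.064 g CaCO3 = 1 grain of hardness (from the text)

    def classify_hardness(grams_caco3_per_gallon):
        """Classify water hardness from grams of removable CaCO3 per gallon."""
        grains = grams_caco3_per_gallon / GRAMS_CACO3_PER_GRAIN
        if grains <= 4:
            return grains, "soft"
        if grains <= 10:
            return grains, "medium"
        if grains <= 20:
            return grains, "hard"
        return grains, "beyond the scale quoted above"

    print(classify_hardness(0.5))  # ~7.8 grains -> medium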

Boiling, steaming, and simmering are popular cooking methods that often require immersing food in
water or its gaseous state, steam. Water is also used for dishwashing.

Water law, water politics and water crisis

An estimate of the share of people in developing countries with access to potable water 1970–2000

Main articles: Water law, Water right, and Water crisis

Water politics is politics affected by water and water resources. For this reason, water is a strategic
resource around the globe and an important element in many political conflicts. Water scarcity and water
pollution cause health impacts and damage to biodiversity.

1.6 billion people have gained access to a safe water source since 1990.[47] The proportion of people in
developing countries with access to safe water is calculated to have improved from 30% in 1970[4] to 71%
in 1990, 79% in 2000 and 84% in 2004. This trend is projected to continue.[5] To halve, by 2015, the
proportion of people without sustainable access to safe drinking water is one of the Millennium
Development Goals. This goal is projected to be reached.

A 2006 United Nations report stated that "there is enough water for everyone", but that access to it is
hampered by mismanagement and corruption.[48] In addition, global initiatives to improve the efficiency
of aid delivery, such as the Paris Declaration on Aid Effectiveness, have not been taken up by water
sector donors as effectively as they have in education and health, potentially leaving multiple donors
working on overlapping projects and recipient governments without empowerment to act.[49]

The UN World Water Development Report (WWDR, 2003) from the World Water Assessment Program
indicates that, in the next 20 years, the quantity of water available to everyone is predicted to decrease by
30%. 40% of the world's inhabitants currently have insufficient fresh water for minimal hygiene. More
than 2.2 million people died in 2000 from waterborne diseases (related to the consumption of
contaminated water) or drought. In 2004, the UK charity WaterAid reported that a child dies every 15
seconds from easily preventable water-related diseases, often owing to a lack of sewage disposal.

Organizations concerned with water protection include the International Water Association (IWA),
WaterAid, Water 1st, and the American Water Resources Association. Water-related conventions include
the United Nations Convention to Combat Desertification (UNCCD), the International Convention for the
Prevention of Pollution from Ships, the United Nations Convention on the Law of the Sea and the Ramsar
Convention. World Day for Water takes place on 22 March and World Ocean Day on 8 June.

Water used in the production of a good or service is known as virtual water.

Water in culture

Religion

Main article: Water and religion

Water is considered a purifier in most religions. Major faiths that incorporate ritual washing (ablution)
include Christianity, Islam, Hinduism, Rastafari movement, Shinto, Taoism, Judaism, and Wicca.
Immersion (or aspersion or affusion) of a person in water is a central sacrament of Christianity (where it
is called baptism); it is also a part of the practice of other religions, including Judaism (mikvah) and
Sikhism (Amrit Sanskar). In addition, a ritual bath in pure water is performed for the dead in many
religions, including Judaism and Islam. In Islam, the five daily prayers can in most cases be performed
after washing certain parts of the body with clean water (wudu; see Tayammum for exceptions). In Shinto,
water is used in almost all rituals to cleanse a person or an area (e.g., in the ritual of misogi). Water is
mentioned numerous times in the Bible, for example: "The earth was formed out of water and by water"
(NIV). In the Qur'an it is stated that "Living things are made of water" and it is often used to describe
paradise.

Philosophy

The Ancient Greek philosopher Empedocles held that water is one of the four classical elements along
with fire, earth and air, and was regarded as the ylem, or basic substance of the universe. Water was
considered cold and moist. In the theory of the four bodily humors, water was associated with phlegm.
The classical element of Water was also one of the five elements in traditional Chinese philosophy, along
with earth, fire, wood, and metal.

Water is also taken as a role model in some parts of traditional and popular Asian philosophy. James
Legge's 1891 translation of the Dao De Jing states "The highest excellence is like (that of) water. The
excellence of water appears in its benefiting all things, and in its occupying, without striving (to the
contrary), the low place which all men dislike. Hence (its way) is near to (that of) the Tao" and "There is
nothing in the world more soft and weak than water, and yet for attacking things that are firm and strong
there is nothing that can take precedence of it—for there is nothing (so effectual) for which it can be
changed."[50]

Literature

Water is used in literature as a symbol of purification. Examples include the critical importance of a river
in As I Lay Dying by William Faulkner and the drowning of Ophelia in Hamlet.

Sherlock Holmes held that "From a drop of water, a logician could infer the possibility of an Atlantic or a
Niagara without having seen or heard of one or the other."[51]

Properties of water


"H2O" and "HOH" redirect here. For other uses, see H2O (disambiguation) and HOH (disambiguation).

This article is about the physical and chemical properties of pure water. For general discussion and its
distribution and importance in life, see Water. For other uses, see Water (disambiguation).

Water (H2O)

IUPAC name: Water; Oxidane

Other names: Hydrogen oxide; Dihydrogen monoxide; Hydrogen monoxide

Identifiers

CAS number: 7732-18-5
PubChem: 962
ChemSpider: 937
UNII: 059QF0KO0R
ChEBI: CHEBI:15377
ChEMBL: CHEMBL1098659
RTECS number: ZC0110000

Properties

Molecular formula: H2O
Molar mass: 18.01528(33) g/mol
Appearance: white solid or almost colorless, transparent, with a slight hint of blue, crystalline solid or liquid[1]
Density: 1000 kg/m3, liquid (4 °C) (62.4 lb/cu ft); 917 kg/m3, solid
Melting point: 0 °C, 32 °F (273.15 K)[2]
Boiling point: 99.98 °C, 212 °F (373.13 K)[2]
Acidity (pKa): 15.74; ~35–36
Basicity (pKb): 15.74
Refractive index (nD): 1.3330
Viscosity: 0.001 Pa·s at 20 °C

Structure

Crystal structure: Hexagonal
Molecular shape: Bent
Dipole moment: 1.85 D

Hazards

Main hazards: Drowning (see also Dihydrogen monoxide hoax)
NFPA 704: 0

Related compounds

Other cations: Hydrogen sulfide; Hydrogen selenide; Hydrogen telluride; Hydrogen polonide; Hydrogen peroxide
Related solvents: acetone; methanol
Related compounds: water vapor; ice; heavy water

Except where noted otherwise, data are given for materials in their standard state (at 25 °C, 100 kPa).

Water (H2O) is the most abundant compound on Earth's surface, covering about 70% of the planet's
surface. In nature it exists in liquid, solid, and gaseous states. It is in dynamic equilibrium between the
liquid and gas states at standard temperature and pressure. At room temperature, it is a nearly colorless
(with a hint of blue), tasteless, and odorless liquid. Many substances dissolve in water, and it is commonly
referred to as the universal solvent. Because of this, water in nature and in use is rarely pure and some of
its properties may vary slightly from those of the pure substance. However, there are many compounds
that are essentially, if not completely, insoluble in water. Water is the only common substance found
naturally in all three common states of matter and it is essential for life on Earth.[3] Water usually makes
up 55% to 78% of the human body.[4]

Contents


 1 Forms of water
 2 Physics and chemistry
o 2.1 Water, ice and vapor
 2.1.1 Heat capacity and heats of vaporization and fusion
 2.1.2 Density of water and ice
 2.1.3 Density of saltwater and ice
 2.1.4 Miscibility and condensation
 2.1.5 Vapor pressure
 2.1.6 Compressibility
 2.1.7 Triple point
o 2.2 Electrical properties
 2.2.1 Electrical conductivity
 2.2.2 Electrolysis
o 2.3 Polarity and hydrogen bonding
 2.3.1 Cohesion and adhesion
 2.3.2 Surface tension
 2.3.3 Capillary action
 2.3.4 Water as a solvent
o 2.4 Water in acid-base reactions
 2.4.1 Ligand chemistry
 2.4.2 Organic chemistry
 2.4.3 Acidity in nature
o 2.5 Water in redox reactions
o 2.6 Geochemistry
o 2.7 Transparency
o 2.8 Heavy water and isotopologues
 3 History
 4 Systematic naming
 5 See also
 6 References
 7 External links

[edit] Forms of water

Like many substances, water can take numerous forms that are broadly categorized by phase of matter.
The liquid phase is the most common among water's phases (within the Earth's atmosphere and on its
surface) and is the form that is generally denoted by the word "water." The solid phase of water is known
as ice and
commonly takes the structure of hard, amalgamated crystals, such as ice cubes, or loosely accumulated
granular crystals, like snow. For a list of the many different crystalline and amorphous forms of solid
H2O, see the article ice. The gaseous phase of water is known as water vapor (or steam), and is
characterized by water assuming the configuration of a transparent cloud. The fourth state of water, that
of a supercritical fluid, is much less common than the other three and only rarely occurs in nature, in
extremely uninhabitable conditions. When water reaches its critical temperature and critical pressure
(647 K and 22.064 MPa), the liquid and gas phases merge into one homogeneous fluid phase,
with properties of both gas and liquid. One example of naturally occurring supercritical water is in the
hottest parts of deep water hydrothermal vents, in which water is heated to the critical temperature by
scalding volcanic plumes and achieves the critical pressure because of the crushing weight of the ocean at
the extreme depths at which the vents are located. Additionally, wherever there is volcanic activity below
a depth of 2.25 km (1.4 miles), water can be expected to occur in the supercritical phase.[5]

Vienna Standard Mean Ocean Water is the current international standard for water isotopes. Naturally
occurring water is almost completely composed of the neutron-less hydrogen isotope protium. Only about
155 ppm of its hydrogen is deuterium (2H or D), a hydrogen isotope with one neutron, and fewer than 20
parts per quintillion are tritium (3H or T), which has two.

Heavy water is water with a higher-than-average deuterium content, up to 100%. Chemically, it is similar
but not identical to normal water. This is because the nucleus of deuterium is twice as heavy as protium,
and thus causes noticeable differences in bonding energies. Because water molecules exchange hydrogen
atoms with one another, hydrogen deuterium oxide (DOH) is much more common in low-purity heavy
water than pure dideuterium monoxide (D2O). Humans are generally unaware of taste differences,[6] but
sometimes report a burning sensation[7] or sweet flavor.[8] Rats, however, are able to avoid heavy water by
smell.[9] Toxic to many animals,[9] heavy water is used in the nuclear reactor industry to moderate (slow
down) neutrons. Light water reactors are also common, where "light" simply designates normal water.

Light water more specifically refers to deuterium-depleted water (DDW), water whose deuterium
content has been reduced below the standard 155 ppm level. Light water has been found to be beneficial
in improving cancer survival rates in mice[10] and humans undergoing chemotherapy.[11]

[edit] Physics and chemistry

See also: Water chemistry analysis

Water is the chemical substance with chemical formula H2O: one molecule of water has two hydrogen
atoms covalently bonded to a single oxygen atom.[12] Water is a tasteless, odorless liquid at ambient
temperature and pressure, and appears colorless in small quantities, although it has its own intrinsic very
light blue hue. Ice also appears colorless, and water vapor is essentially invisible as a gas.[1]

Water is primarily a liquid under standard conditions, which is not predicted from its relationship to other
analogous hydrides of the oxygen family in the periodic table, which are gases such as hydrogen sulfide.
Also the elements surrounding oxygen in the periodic table, nitrogen, fluorine, phosphorus, sulfur and
chlorine, all combine with hydrogen to produce gases under standard conditions. The reason that water
forms a liquid is that oxygen is more electronegative than all of these elements with the exception of
fluorine. Oxygen attracts electrons much more strongly than hydrogen, resulting in a net positive charge
on the hydrogen atoms, and a net negative charge on the oxygen atom. The presence of a charge on each
of these atoms gives each water molecule a net dipole moment. Electrical attraction between water
molecules due to this dipole pulls individual molecules closer together, making it more difficult to
separate the molecules and therefore raising the boiling point. This attraction is known as hydrogen
bonding. The molecules of water are constantly moving in relation to each other, and the hydrogen bonds
are continually breaking and reforming at timescales faster than 200 femtoseconds.[13] However, this bond
is strong enough to create many of the peculiar properties of water described in this article, such as
those that make it integral to life. Water can be described as a polar liquid that slightly dissociates
disproportionately into the hydronium ion (H3O+(aq)) and an associated hydroxide ion (OH−(aq)).

2 H2O (l) ⇌ H3O+ (aq) + OH− (aq)

The dissociation constant for this dissociation is commonly symbolized as Kw and has a value of about
10−14 at 25 °C; see "Water (data page)" and "Self-ionization of water" for more information.
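
Since dissociation produces H3O+ and OH− in equal numbers in pure water, the familiar neutral pH of 7 follows directly from Kw; a minimal sketch:

    import math

    KW_25C = 1.0e-14  # self-ionization constant of water at 25 degC

    # In pure water [H3O+] = [OH-], so [H3O+] = sqrt(Kw)
    h3o = math.sqrt(KW_25C)
    print(-math.log10(h3o))  # 7.0, the neutral pH at 25 degC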

[edit] Water, ice and vapor

[edit] Heat capacity and heats of vaporization and fusion

Heat of vaporization

Temperature (°C)   Hv (kJ·mol−1)[14]

0 45.054

25 43.99

40 43.35

60 42.482

80 41.585

100 40.657

120 39.684

140 38.643

160 37.518

180 36.304

200 34.962

220 33.468

240 31.809

260 29.93

280 27.795

300 25.3

320 22.297

340 18.502

360 12.966

374 2.066

Main article: Enthalpy of vaporization

Water has the second highest specific heat capacity of all known substances, after ammonia, as well as a
high heat of vaporization (40.65 kJ·mol−1), both of which are a result of the extensive hydrogen bonding
between its molecules. These two unusual properties allow water to moderate Earth's climate by buffering
large fluctuations in temperature. According to Josh Willis of NASA's Jet Propulsion Laboratory, the
oceans absorb one thousand times more heat than the atmosphere (air) and hold 80 to 90% of the heat
from global warming.[15]

The specific enthalpy of fusion of water is 333.55 kJ·kg−1 at 0 °C. Of common substances, only that of
ammonia is higher. This property confers resistance to melting upon the ice of glaciers and drift ice.
Before the advent of mechanical refrigeration, ice was in common use to retard food spoilage (and still
is).
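
To see why the heat of fusion makes ice such an effective coolant, compare the heat absorbed in melting with the heat absorbed in warming the meltwater; a minimal sketch using the 333.55 kJ/kg figure above, an approximate liquid heat capacity of 4.186 J/(g·K) (see the table below), and an arbitrary 1 kg example mass:

    H_FUSION = 333.55e3  # J/kg, specific enthalpy of fusion at 0 degC
    CP_WATER = 4186.0    # J/(kg*K), approximate heat capacity of liquid water

    mass = 1.0  # kg of ice at 0 degC; the mass is an arbitrary example
    q_melt = mass * H_FUSION                 # heat absorbed just to melt the ice
    q_warm = mass * CP_WATER * (20.0 - 0.0)  # heat to then warm the water to 20 degC
    print(q_melt / 1000.0, q_warm / 1000.0)  # ~333.6 kJ vs ~83.7 kJ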

Constant-pressure heat capacity

Temperature (°C)   Cp (J/(g·K) at 100 kPa)[16]

0 4.2176

10 4.1921

20 4.1818

30 4.1784

40 4.1785

50 4.1806

60 4.1843

70 4.1895

80 4.1963

90 4.205

100 4.2159

Note that the specific heat capacity of ice at –10 °C is about 2.05 J/(g·K) and that the heat capacity of
steam at 100 °C is about 2.080 J/(g·K).

[edit] Density of water and ice

Density of liquid water

Temp (°C) Density (kg/m3)[17][18]

+100 958.4

+80 971.8

+60 983.2

+40 992.2

+30 995.6502

+25 997.0479

+22 997.7735

+20 998.2071

+15 999.1026

+10 999.7026

+4 999.9720

0 999.8395

−10 998.117

−20 993.547

−30 983.854

The values below 0 °C refer to supercooled water.

The density of water is approximately one gram per cubic centimeter. More precisely, it is dependent on
its temperature, but the relation is not linear and is not even monotonic (see the table). When cooled from
room temperature, liquid water becomes increasingly dense, just like other substances. But at
approximately 4 °C, pure water reaches its maximum density. As it is cooled further, it expands to become
less dense. This unusual negative thermal expansion is attributed to strong, orientation-dependent
intermolecular interactions and is also observed in molten silica.[19]

The solid form of most substances is denser than the liquid phase; thus, a block of the solid will sink in
the liquid. But, by contrast, a block of ice floats in liquid water because ice is less dense than liquid water.
Upon freezing, the density of water decreases by about 9%.[20] The reason for this is the 'cooling' of
intermolecular vibrations allowing the molecules to form steady hydrogen bonds with their neighbors and
thereby gradually locking into positions reminiscent of the hexagonal packing achieved upon freezing to
ice Ih. While the hydrogen bonds are shorter in the crystal than in the liquid, this locking effect reduces
the average coordination number of molecules as the liquid approaches nucleation. Other substances that
expand on freezing include silicon, gallium, germanium, antimony, bismuth, plutonium, and compounds
that form spacious crystal lattices with tetrahedral coordination.

Only ordinary, hexagonal ice is less dense than the liquid. Under increasing pressure ice undergoes a
number of transitions to other allotropic forms with higher density than liquid water, such as high density
amorphous ice (HDA) and very high density amorphous ice (VHDA).

Water also expands significantly as the temperature increases. Its density decreases by 4% from its
highest value when approaching the boiling point.
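
Because the relation is tabulated rather than given by a simple formula, density at intermediate temperatures is conveniently estimated by interpolating the table; a sketch, assuming linear behavior between the tabulated points (an approximation):

    # (temperature degC, density kg/m3) pairs taken from the table above
    DENSITY_TABLE = [(0, 999.8395), (4, 999.9720), (10, 999.7026),
                     (20, 998.2071), (25, 997.0479), (40, 992.2)]

    def density(t_c):
        """Linearly interpolate liquid-water density between tabulated points."""
        for (t0, d0), (t1, d1) in zip(DENSITY_TABLE, DENSITY_TABLE[1:]):
            if t0 <= t_c <= t1:
                return d0 + (d1 - d0) * (t_c - t0) / (t1 - t0)
        raise ValueError("temperature outside tabulated range")

    print(density(4))   # 999.972 kg/m3, the density maximum
    print(density(15))  # ~998.95 vs the tabulated 999.1026: interpolation error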

The melting point of ice is 0 °C (32 °F, 273 K) at standard pressure, however, pure liquid water can be
supercooled well below that temperature without freezing if the liquid is not mechanically disturbed. It
can remain in a fluid state down to its homogeneous nucleation point of approximately 231 K (−42 °C).[21]
The melting point of ordinary hexagonal ice falls slightly under moderately high pressures, but as ice
transforms into its allotropes (see crystalline states of ice) above 209.9 MPa (2,072 atm), the melting
point increases markedly with pressure, i.e. reaching 355 K (82 °C) at 2.216 GPa (21,870 atm) (triple
point of Ice VII[22]).

A significant increase of pressure is required to lower the melting point of ordinary ice; the pressure
exerted by an ice skater on the ice would only reduce the melting point by approximately 0.09 °C
(0.16 °F).[citation needed]

These properties of water have important consequences for its role in the ecosystem of Earth. Water at a
temperature of 4 °C will always accumulate at the bottom of fresh-water lakes, irrespective of the
temperature in the atmosphere. Since water and ice are poor conductors of heat[23] (good insulators), it is
unlikely that sufficiently deep lakes will freeze completely, unless stirred by strong currents that would
mix cooler and warmer water and accelerate the cooling. In warming weather, chunks of ice float, rather
than sink to the bottom where they might melt extremely slowly. These phenomena thus may preserve
aquatic life.

[edit] Density of saltwater and ice

WOA surface density.

The density of water is dependent on the dissolved salt content as well as the temperature of the water. Ice
still floats in the oceans, otherwise they would freeze from the bottom up. However, the salt content of
oceans lowers the freezing point by about 2 °C (see following paragraph for explanation) and lowers the
temperature of the density maximum of water to the freezing point. That is why, in ocean water, the
downward convection of colder water is not blocked by an expansion of water as it becomes colder near
the freezing point. The oceans' cold water near the freezing point continues to sink. For this reason, any
creature attempting to survive at the bottom of such cold water as the Arctic Ocean generally lives in
water that is 4 °C colder than the temperature at the bottom of frozen-over fresh water lakes and rivers in
the winter.

In cold countries, when the temperature of a fresh-water body reaches 4 °C, the layers of water near the
top in contact with cold air continue to lose heat energy and their temperature falls below 4 °C. On
cooling below 4 °C, these layers do not sink but rise, as water has its maximum density at 4 °C (see
Polarity and hydrogen bonding, below). Because of this, the layer of water at 4 °C remains at the bottom,
and above it layers at 3 °C, 2 °C, 1 °C and 0 °C form. Since ice is a bad conductor of heat, it does not
conduct heat energy away from the water beneath it, which prevents that water from freezing. Hence
aquatic creatures survive in such places.[citation needed]

As the surface of salt water begins to freeze (at −1.9 °C for normal salinity seawater, 3.5%) the ice that
forms is essentially salt free with a density approximately equal to that of freshwater ice. This ice floats
on the surface and the salt that is "frozen out" adds to the salinity and density of the seawater just below
it, in a process known as brine rejection. This denser saltwater sinks by convection and the replacing
seawater is subject to the same process. This provides essentially freshwater ice at −1.9 °C on the surface.
The increased density of the seawater beneath the forming ice causes it to sink towards the bottom. On a
large scale, the process of brine rejection and sinking cold salty water results in ocean currents forming to
transport such water away from the pole. One potential consequence of global warming is that the loss of
Arctic ice could result in the loss of these currents as well, which could have unforeseeable consequences
on near and distant climates.

[edit] Miscibility and condensation

Red line shows saturation

Main article: Humidity

Water is miscible with many liquids, for example ethanol in all proportions, forming a single
homogeneous liquid. On the other hand, water and most oils are immiscible, usually forming layers
with density increasing from top to bottom.

As a gas, water vapor is completely miscible with air. On the other hand, the maximum water vapor
pressure that is thermodynamically stable with the liquid (or solid) at a given temperature is relatively low
compared with total atmospheric pressure. For example, if the vapor partial pressure[24] is 2% of
atmospheric pressure and the air is cooled from 25 °C, then starting at about 18 °C (see the vapor pressure
table below) water will start to condense, defining the dew point and creating fog or dew. The reverse
process accounts for fog burning off in the morning. If one raises the humidity at room temperature, say
by running a hot shower or a bath, and the temperature stays about the same, the vapor soon reaches the
pressure for phase change and condenses out as a visible mist (commonly called steam).

A gas in this context is referred to as saturated, or at 100% relative humidity, when the vapor pressure of
water in the air is in equilibrium with the vapor pressure due to (liquid) water; water (or ice, if cool
enough) will fail to lose mass through evaporation when exposed to saturated air. Because the amount of
water vapor in air is small, relative humidity, the ratio of the partial pressure due to water vapor to the
saturated partial vapor pressure, is much more useful. Water vapor pressure above 100% relative humidity
is called super-saturated and can occur if air is rapidly cooled, say by rising suddenly in an updraft.[25]

[edit] Vapor pressure

Vapor pressure diagrams of water

Main article: Vapor pressure of water

Temperature Pressure[26]

°C K °F Pa atm torr in Hg psi

0 273 32 611 0.00603 4.58 0.180 0.0886

5 278 41 872 0.00861 6.54 0.257 0.1265

10 283 50 1,228 0.01212 9.21 0.363 0.1781

12 285 54 1,403 0.01385 10.52 0.414 0.2034

14 287 57 1,599 0.01578 11.99 0.472 0.2318

16 289 61 1,817 0.01793 13.63 0.537 0.2636

17 290 63 1,937 0.01912 14.53 0.572 0.2810

18 291 64 2,064 0.02037 15.48 0.609 0.2993

19 292 66 2,197 0.02168 16.48 0.649 0.3187

20 293 68 2,338 0.02307 17.54 0.691 0.3392

21 294 70 2,486 0.02453 18.65 0.734 0.3606

22 295 72 2,644 0.02609 19.83 0.781 0.3834

23 296 73 2,809 0.02772 21.07 0.830 0.4074

24 297 75 2,984 0.02945 22.38 0.881 0.4328

25 298 77 3,168 0.03127 23.76 0.935 0.4594
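
The table above also allows relative humidity and dew point to be estimated by interpolation. A sketch: the 2%-of-atmospheric partial pressure is the worked example from the miscibility section, and linear interpolation between table rows is an approximation.

    # (degC, Pa) saturation vapor pressures, a subset of the table above
    SAT = [(0, 611), (5, 872), (10, 1228), (12, 1403), (14, 1599),
           (16, 1817), (18, 2064), (20, 2338), (22, 2644), (25, 3168)]

    def saturation_pressure(t_c):
        """Linearly interpolate saturation vapor pressure between table rows."""
        for (t0, p0), (t1, p1) in zip(SAT, SAT[1:]):
            if t0 <= t_c <= t1:
                return p0 + (p1 - p0) * (t_c - t0) / (t1 - t0)
        raise ValueError("temperature outside table range")

    def dew_point(partial_pa):
        """Temperature at which the given partial pressure saturates."""
        for (t0, p0), (t1, p1) in zip(SAT, SAT[1:]):
            if p0 <= partial_pa <= p1:
                return t0 + (t1 - t0) * (partial_pa - p0) / (p1 - p0)
        raise ValueError("pressure outside table range")

    p = 0.02 * 101325                           # ~2,027 Pa, the earlier worked example
    print(100.0 * p / saturation_pressure(25))  # ~64% relative humidity at 25 degC
    print(dew_point(p))                         # ~17.7 degC, matching the dew point quoted earlier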

[edit] Compressibility

The compressibility of water is a function of pressure and temperature. At 0 °C, in the limit of zero
pressure, the compressibility is 5.1×10−10 Pa−1.[27] In the zero-pressure limit, the compressibility reaches
a minimum of 4.4×10−10 Pa−1 around 45 °C before increasing again with increasing temperature. As the
pressure is increased, the compressibility decreases, being 3.9×10−10 Pa−1 at 0 °C and 100 MPa.

The bulk modulus of water is 2.2 GPa.[28] The low compressibility of non-gases, and of water in
particular, leads to their often being assumed as incompressible. The low compressibility of water means
that even in the deep oceans at 4 km depth, where pressures are 40 MPa, there is only a 1.8% decrease in
volume.[28]
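
The 1.8% figure can be checked directly from the quoted compressibility; a minimal sketch, treating the compressibility as constant over the whole pressure range (only approximately true):

    KAPPA = 4.4e-10       # 1/Pa, compressibility of water (from above)
    DEEP_OCEAN_DP = 40e6  # Pa, pressure increase at roughly 4 km depth

    # Fractional volume change: dV/V = -kappa * dP
    print(KAPPA * DEEP_OCEAN_DP * 100.0)  # ~1.8 percent decrease in volume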

[edit] Triple point


The various triple points of water[29]

Phases in stable equilibrium Pressure Temperature

liquid water, ice Ih, and water vapor 611.73 Pa 273.16 K (0.01 °C)

liquid water, ice Ih, and ice III 209.9 MPa 251 K (−22 °C)

liquid water, ice III, and ice V 350.1 MPa −17.0 °C

liquid water, ice V, and ice VI 632.4 MPa 0.16 °C

ice Ih, Ice II, and ice III 213 MPa −35 °C

ice II, ice III, and ice V 344 MPa −24 °C

ice II, ice V, and ice VI 626 MPa −70 °C

The temperature and pressure at which solid, liquid, and gaseous water coexist in equilibrium is called the
triple point of water. This point is used to define the units of temperature (the kelvin, the SI unit of
thermodynamic temperature and, indirectly, the degree Celsius and even the degree Fahrenheit).

As a consequence, water's triple point temperature is a prescribed value rather than a measured quantity.

Water phase diagram: Y-axis = pressure in pascals (powers of 10), X-axis = temperature in kelvins, S =
solid, L = liquid, V = vapor, CP = critical point, TP = triple point of water

The triple point is at a temperature of 273.16 K (0.01 °C) by convention, and at a pressure of 611.73 Pa.
This pressure is quite low, about 1⁄166 of the normal sea level barometric pressure of 101,325 Pa. The
atmospheric surface pressure on planet Mars is remarkably close to the triple point pressure, and the zero-
elevation or "sea level" of Mars is defined by the height at which the atmospheric pressure corresponds to
the triple point of water.

Although it is commonly named as "the triple point of water", the stable combination of liquid water, ice
I, and water vapor is but one of several triple points on the phase diagram of water. Gustav Heinrich
Johann Apollon Tammann in Göttingen produced data on several other triple points in the early 20th
century. Kamb and others documented further triple points in the 1960s.[29][30][31]

[edit] Electrical properties

[edit] Electrical conductivity

Pure water containing no ions is an excellent insulator, but not even "deionized" water is completely free
of ions. Water undergoes auto-ionization in the liquid state. Further, because water is such a good solvent,
it almost always has some solute dissolved in it, most frequently a salt. If water has even a tiny amount of
such an impurity, then it can conduct electricity readily, as impurities such as salt separate into free ions
in aqueous solution by which an electric current can flow.[citation needed]

The theoretical maximum electrical resistivity for water is approximately 182 kΩ·m at 25
°C. This figure agrees well with what is typically seen on reverse osmosis, ultra-filtered and deionized
ultra-pure water systems used, for instance, in semiconductor manufacturing plants. A salt or acid
contaminant level exceeding even 100 parts per trillion (ppt) in ultra-pure water begins to noticeably
lower its resistivity by up to several kΩ·m (equivalently, raise its conductivity by hundreds of nanosiemens per meter).[citation needed]

The low electrical conductivity of water increases significantly upon solvation of a small amount of ionic
material, such as hydrogen chloride or any salt.

Any electrical conductivity observable in water is the result of ions of mineral salts and carbon dioxide
dissolved in it. Carbon dioxide forms carbonate ions in water. Water self-ionizes, with two water
molecules forming one hydroxide anion and one hydronium cation, but not to a degree sufficient to carry
enough electric current to do any work or harm in most operations. In pure water, sensitive equipment can
detect a very
slight electrical conductivity of 0.055 µS/cm at 25 °C. Water can also be electrolyzed into oxygen and
hydrogen gases but in the absence of dissolved ions this is a very slow process, as very little current is
conducted. While in metals the primary charge carriers are electrons, in ice the primary charge carriers
are protons (see proton conductor).[citation needed]

[edit] Electrolysis
Main article: Electrolysis of water

Water can be split into its constituent elements, hydrogen and oxygen, by passing an electric current
through it. This process is called electrolysis. Water molecules naturally dissociate into H+ and OH− ions,
which are attracted toward the cathode and anode, respectively. At the cathode, two H+ ions pick up
electrons and form H2 gas. At the anode, four OH− ions combine and release O2 gas, molecular water, and
four electrons. The gases produced bubble to the surface, where they can be collected. The standard
potential of the water electrolysis cell is 1.23 V at 25 °C.
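
Faraday's laws connect the charge passed to the gas produced: two electrons per H2 molecule and four per O2 molecule, as the half-reactions above imply. A minimal sketch, with an illustrative current and duration:

    FARADAY = 96485.0  # C per mole of electrons

    def electrolysis_moles(current_a, seconds):
        """Moles of H2 and O2 produced by an ideal electrolysis cell."""
        mol_e = current_a * seconds / FARADAY
        return mol_e / 2.0, mol_e / 4.0  # 2 e- per H2, 4 e- per O2

    h2, o2 = electrolysis_moles(1.0, 3600.0)  # 1 A for one hour, an illustrative run
    print(h2, o2)  # ~0.0187 mol H2, ~0.0093 mol O2: the expected 2:1 molar ratio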

[edit] Polarity and hydrogen bonding

Model of hydrogen bonds between molecules of water

An important feature of water is its polar nature. The water molecule forms an angle, with hydrogen
atoms at the tips and oxygen at the vertex. Since oxygen has a higher electronegativity than hydrogen, the
side of the molecule with the oxygen atom has a partial negative charge. An object with such a charge
difference is called a dipole. The charge differences cause water molecules to be attracted to each other
(the relatively positive areas being attracted to the relatively negative areas) and to other polar molecules.
This attraction contributes to hydrogen bonding, and explains many of the properties of water, such as
solvent action.[32]

A water molecule can form a maximum of four hydrogen bonds because it can accept two and donate two
hydrogen atoms. Other molecules, such as hydrogen fluoride, ammonia and methanol, also form hydrogen
bonds, but they do not show the anomalous thermodynamic, kinetic or structural behavior observed in
water. The difference between water and other hydrogen-bonding liquids is that, apart from water, none of
these molecules can form four hydrogen bonds, whether due to an inability to donate/accept hydrogens or
due to steric effects in bulky residues. In water, local tetrahedral order due to the four hydrogen bonds
gives rise to an open structure
and a 3-dimensional bonding network, resulting in the anomalous decrease of density when cooled below
4 °C.

Although hydrogen bonding is a relatively weak attraction compared to the covalent bonds within the
water molecule itself, it is responsible for a number of water's physical properties. One such property is its
relatively high melting and boiling point temperatures; more energy is required to break the hydrogen
bonds between molecules. The similar compound hydrogen sulfide (H2S), which has much weaker
hydrogen bonding, is a gas at room temperature even though it has twice the molecular mass of water.
The extra bonding between water molecules also gives liquid water a large specific heat capacity. This
high heat capacity makes water a good heat storage medium (coolant) and heat shield.

[edit] Cohesion and adhesion

Dew drops adhering to a spider web

Water molecules stay close to each other (cohesion), due to the collective action of hydrogen bonds
between water molecules. These hydrogen bonds are constantly breaking, with new bonds being formed
with different water molecules; but at any given time in a sample of liquid water, a large portion of the
molecules are held together by such bonds.[33]

Water also has high adhesion properties because of its polar nature. On extremely clean/smooth glass the
water may form a thin film because the molecular forces between glass and water molecules (adhesive
forces) are stronger than the cohesive forces. In biological cells and organelles, water is in contact with
membrane and protein surfaces that are hydrophilic; that is, surfaces that have a strong attraction to water.
Irving Langmuir observed a strong repulsive force between hydrophilic surfaces. To dehydrate
hydrophilic surfaces—to remove the strongly held layers of water of hydration—requires doing
substantial work against these forces, called hydration forces. These forces are very large but decrease
rapidly over a nanometer or less. They are important in biology, particularly when cells are dehydrated by
exposure to dry atmospheres or to extracellular freezing.[34]

[edit] Surface tension
Main article: Surface tension

This paper clip is under the water level, which has risen gently and smoothly. Surface tension prevents the
clip from submerging and the water from overflowing the glass edges.

Temperature dependence of the surface tension of pure water

Water has a high surface tension of 72.8 mN/m at room temperature, the highest of the non-metallic
liquids, caused by the strong cohesion between water molecules. This can be seen when small quantities
of water are placed onto a sorption-free (non-adsorbent and non-absorbent) surface, such as polyethylene
or Teflon, and the water stays together as drops. Just as significantly, air trapped in surface disturbances
forms bubbles, which sometimes last long enough to transfer gas molecules to the water.[citation needed]

Another surface tension effect is capillary waves, which are the surface ripples that form around the
impacts of drops on water surfaces, and sometimes occur with strong subsurface currents flowing to the
water surface. The apparent elasticity caused by surface tension drives the waves.

[edit] Capillary action


Main article: Capillary action

Due to an interplay of the forces of adhesion and surface tension, water exhibits capillary action whereby
water rises into a narrow tube against the force of gravity. Water adheres to the inside wall of the tube, and
surface tension tends to straighten the surface, causing the surface to rise; more water is then pulled up
through cohesion. The process continues as the water flows up the tube until there is enough water that
gravity balances the adhesive force.
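
This balance is summarized by Jurin's law, h = 2γ·cos θ / (ρ·g·r). A sketch using the 72.8 mN/m surface tension quoted above; the zero contact angle (clean glass) and the 0.5 mm tube radius are illustrative assumptions.

    import math

    GAMMA = 72.8e-3  # N/m, surface tension of water at room temperature (from above)
    RHO = 998.0      # kg/m3, density of water near 20 degC
    G = 9.81         # m/s2, gravitational acceleration

    def capillary_rise(radius_m, contact_angle_deg=0.0):
        """Equilibrium rise height in a narrow tube (Jurin's law)."""
        return 2.0 * GAMMA * math.cos(math.radians(contact_angle_deg)) / (RHO * G * radius_m)

    print(capillary_rise(0.5e-3))  # ~0.03 m, i.e. about 3 cm in a 0.5 mm-radius tube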

Surface tension and capillary action are important in biology. For example, when water is carried through
xylem up stems in plants, the strong intermolecular attractions (cohesion) hold the water column together
and adhesive properties maintain the water attachment to the xylem and prevent tension rupture caused by
transpiration pull.

[edit] Water as a solvent


Main article: Aqueous solution

Presence of colloidal calcium carbonate from high concentrations of dissolved lime turns the water of
Havasu Falls turquoise.

Water is also a good solvent due to its polarity. Substances that will mix well and dissolve in water (e.g.
salts) are known as hydrophilic ("water-loving") substances, while those that do not mix well with water
(e.g. fats and oils), are known as hydrophobic ("water-fearing") substances. The ability of a substance to
dissolve in water is determined by whether or not the substance can match or better the strong attractive
forces that water molecules generate between other water molecules. If a substance has properties that do
not allow it to overcome these strong intermolecular forces, the molecules are "pushed out" from the
water, and do not dissolve. Contrary to the common misconception, water and hydrophobic substances do
not "repel", and the hydration of a hydrophobic surface is energetically, but not entropically, favorable.

When an ionic or polar compound enters water, it is surrounded by water molecules (hydration). The
relatively small size of water molecules typically allows many water molecules to surround one molecule
of solute. The partially negative dipole ends of the water are attracted to positively charged components
of the solute, and vice versa for the positive dipole ends.

In general, ionic and polar substances such as acids, alcohols, and salts are relatively soluble in water, and
non-polar substances such as fats and oils are not. Non-polar molecules stay together in water because it
is energetically more favorable for the water molecules to hydrogen bond to each other than to engage in
van der Waals interactions with non-polar molecules.

An example of an ionic solute is table salt; the sodium chloride, NaCl, separates into Na+ cations and Cl−
anions, each being surrounded by water molecules. The ions are then easily transported away from their
crystalline lattice into solution. An example of a nonionic solute is table sugar. The water dipoles make
hydrogen bonds with the polar regions of the sugar molecule (OH groups) and allow it to be carried away
into solution.

[edit] Water in acid-base reactions

Chemically, water is amphoteric: it can act as either an acid or a base in chemical reactions. According to
the Brønsted-Lowry definition, an acid is defined as a species which donates a proton (a H+ ion) in a
reaction, and a base as one which receives a proton. When reacting with a stronger acid, water acts as a
base; when reacting with a stronger base, it acts as an acid. For instance, water receives an H+ ion from
HCl when hydrochloric acid is formed:

HCl (acid) + H2O (base) → H3O+ + Cl−

In the reaction with ammonia, NH3, water donates a H+ ion, and is thus acting as an acid:

NH3 (base) + H2O (acid) → NH4+ + OH−

Because the oxygen atom in water has two lone pairs, water often acts as a Lewis base, or electron pair
donor, in reactions with Lewis acids, although it can also react with Lewis bases, forming hydrogen bonds
between the electron pair donors and the hydrogen atoms of water. HSAB theory describes water as both
a weak hard acid and a weak hard base, meaning that it reacts preferentially with other hard species:

H+ (Lewis acid) + H2O (Lewis base) → H3O+

Fe3+ (Lewis acid) + 6 H2O (Lewis base) → [Fe(H2O)6]3+

Cl− (Lewis base) + 6 H2O (Lewis acid) → [Cl(H2O)6]−

When a salt of a weak acid or of a weak base is dissolved in water, water can partially hydrolyze the salt,
producing the corresponding base or acid, which gives aqueous solutions of soap and baking soda their
basic pH:

Na2CO3 + H2O ⇌ NaOH + NaHCO3

[edit] Ligand chemistry

Water's Lewis base character makes it a common ligand in transition metal complexes, examples of
which range from solvated ions, such as [Fe(H2O)6]3+, to perrhenic acid, which contains two water
molecules coordinated to a rhenium atom, and various solid hydrates, such as CoCl2·6H2O. Water is
typically a monodentate ligand; that is, it forms only one bond with the central atom.

[edit] Organic chemistry

As a hard base, water reacts readily with organic carbocations, for example in the hydration reaction, in
which a hydroxyl group (OH−) and an acidic proton are added to the two carbon atoms bonded together in
a carbon-carbon double bond, resulting in an alcohol. When addition of water to an organic molecule
cleaves the molecule in two, hydrolysis is said to occur. Notable examples of hydrolysis are the
saponification of fats and the digestion of proteins and polysaccharides. Water can also be a leaving group
in SN2 substitution and E2 elimination reactions; the latter is then known as a dehydration reaction.

[edit] Acidity in nature

In pure water, the concentration of hydroxide ions (OH−) equals that of the hydronium (H3O+) or
hydrogen (H+) ions, which gives a pH of 7 at 298 K. In practice, pure water is very difficult to produce.
Water left exposed to air for any length of time will dissolve carbon dioxide, forming a dilute solution of
carbonic acid, with a limiting pH of about 5.7. As cloud droplets form in the atmosphere and as raindrops
fall through the air minor amounts of CO2 are absorbed and thus most rain is slightly acidic. If high
amounts of nitrogen and sulfur oxides are present in the air, they too will dissolve into the cloud and rain
drops producing acid rain.
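
The limiting pH of about 5.7 can be reproduced from Henry's law and the first dissociation of carbonic acid. The constants below (a Henry coefficient of ~3.4×10−2 mol/(L·atm), Ka1 ≈ 4.45×10−7, and a CO2 partial pressure of ~280 ppm) are standard textbook values rather than figures from this article, so treat this as an assumption-laden sketch:

    import math

    K_HENRY = 3.4e-2  # mol/(L*atm), CO2 solubility at 25 degC (assumed value)
    KA1 = 4.45e-7     # first dissociation constant of carbonic acid (assumed)
    P_CO2 = 280e-6    # atm, approximate CO2 partial pressure (assumed)

    co2_aq = K_HENRY * P_CO2          # dissolved CO2 from Henry's law
    h_plus = math.sqrt(KA1 * co2_aq)  # from Ka1 = [H+][HCO3-]/[CO2(aq)] with [H+] = [HCO3-]
    print(-math.log10(h_plus))        # ~5.7, the limiting pH quoted above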

[edit] Water in redox reactions

Water contains hydrogen in oxidation state +1 and oxygen in oxidation state −2. Because of that, water
oxidizes chemicals with reduction potential below the potential of H+/H2, such as hydrides, alkali and
alkaline earth metals (except for beryllium), etc. Some other reactive metals, such as aluminum, are
oxidized by water as well, but their oxides are not soluble, and the reaction stops because of passivation.
Note, however, that the rusting of iron is a reaction between iron and oxygen dissolved in water, not
between iron and water.

2 Na + 2 H2O → 2 NaOH + H2

Water can be oxidized itself, emitting oxygen gas, but very few oxidants react with water even if their
reduction potential is greater than the potential of O2/O2−. Almost all such reactions require a catalyst.[35]

4 AgF2 + 2 H2O → 4 AgF + 4 HF + O2

[edit] Geochemistry

Action of water on rock over long periods of time typically leads to weathering and water erosion,
physical processes that convert solid rocks and minerals into soil and sediment, but under some
conditions chemical reactions with water occur as well, resulting in metasomatism or mineral hydration, a
type of chemical alteration of a rock which produces clay minerals in nature and also occurs when
Portland cement hardens.

Water ice can form clathrate compounds, known as clathrate hydrates, with a variety of small molecules
that can be embedded in its spacious crystal lattice. The most notable of these is methane clathrate,
4CH4·23H2O, naturally found in large quantities on the ocean floor.

[edit] Transparency

Main article: Water absorption

Water is relatively transparent to visible light, near ultraviolet light, and far-red light, but it absorbs most
ultraviolet light, infrared light, and microwaves. Most photoreceptors and photosynthetic pigments utilize
the portion of the light spectrum that is transmitted well through water. Microwave ovens take advantage
of water's opacity to microwave radiation to heat the water inside of foods. The very weak onset of
absorption in the red end of the visible spectrum lends water its intrinsic blue hue (see Color of water).

[edit] Heavy water and isotopologues

Several isotopes of both hydrogen and oxygen exist, giving rise to several known isotopologues of water.

Hydrogen occurs naturally in three isotopes. The most common, 1H, accounting for more than 99.98% of
hydrogen in water, consists of only a single proton in its nucleus. A second, stable isotope, deuterium
(chemical symbol D or 2H), has an additional neutron. Deuterium oxide, D2O, is also known as heavy
water because of its higher density. It is used in nuclear reactors as a neutron moderator. The third
isotope, tritium, has 1 proton and 2 neutrons, and is radioactive, decaying with a half-life of 4,500 days.
T2O exists in nature only in minute quantities, being produced primarily via cosmic-ray-induced nuclear
reactions in the atmosphere. Water with one deuterium atom, HDO, occurs naturally in ordinary water in
low concentrations (~0.03%), and D2O in far lower amounts (0.000003%).

The most notable physical differences between H2O and D2O, other than the simple difference in specific
mass, involve properties that are affected by hydrogen bonding, such as freezing and boiling, and other
kinetic effects. The difference in boiling points allows the isotopologues to be separated.

Consumption of pure isolated D2O may affect biochemical processes: ingestion of large amounts impairs
kidney and central nervous system function. Small quantities can be consumed without any ill effects, and
very large amounts of heavy water must be consumed for any toxicity to become apparent.

Oxygen also has three stable isotopes, with 16O present in 99.76%, 17O in 0.04%, and 18O in 0.2% of water
molecules.[36]

[edit] History

The first decomposition of water into hydrogen and oxygen, by electrolysis, was carried out in 1800 by
the English chemist William Nicholson. In 1805, Joseph Louis Gay-Lussac and Alexander von Humboldt
showed that water is composed of two parts hydrogen and one part oxygen.

Gilbert Newton Lewis isolated the first sample of pure heavy water in 1933.

The properties of water have historically been used to define various temperature scales. Notably, the
Kelvin, Celsius, Rankine, and Fahrenheit scales were, or currently are, defined by the freezing and boiling
points of water. The less common scales of Delisle, Newton, Réaumur and Rømer were defined similarly.
The triple point of water is a more commonly used standard point today.[37]

[edit] Systematic naming

The accepted IUPAC name of water is oxidane[38] or simply water, or its equivalent in different
languages, although there are other systematic names which can be used to describe the molecule.[39]

The simplest systematic name of water is hydrogen oxide. This is analogous to related compounds such as
hydrogen peroxide, hydrogen sulfide, and deuterium oxide (heavy water). Another systematic name,
oxidane, is accepted by IUPAC as a parent name for the systematic naming of oxygen-based substituent
groups,[40] although even these commonly have other recommended names. For example, the name
hydroxyl is recommended over oxidanyl for the –OH group. The name oxane is explicitly mentioned by
the IUPAC as being unsuitable for this purpose, since it is already the name of a cyclic ether also known
as tetrahydropyran.

The polarized form of the water molecule, H+OH−, is also called hydron hydroxide by IUPAC
nomenclature.[41]

Dihydrogen monoxide (DHMO) is a rarely used name of water. This term has been used in various hoaxes
that call for this "lethal chemical" to be banned, such as in the dihydrogen monoxide hoax. Other
systematic names for water include hydroxic acid, hydroxylic acid, and hydrogen hydroxide. Both acid
and alkali names exist for water because it is amphoteric (able to react both as an acid or an alkali). None
of these exotic names are used widely.
