
Conductors

For the purposes of electronics and electrical engineering, materials are classified
according to their electrical resistance, which describes how readily they allow electric
current to pass when a voltage is applied. Apart from conductors, materials are classed as
insulators (very poor conductors), semiconductors (materials whose ability to conduct
electricity can be controlled), and superconductors, which (below a critical temperature,
usually cryogenic) offer no measurable electrical resistance, allowing circulating currents,
once established, to flow indefinitely.

Details

Note: The following applies to direct current only. When the direction of voltage/current
alternates, other effects (inductance and capacitance) come into play also.

All conductors contain movable electric charges which will move when an electric
potential difference (measured in volts) is applied across separate points on a wire (etc)
made from the material. This flow of charge (measured in amperes) is what is meant by
electric current. In most materials, the amount of current is proportional to the voltage
(Ohm's law) provided the temperature remains constant and the material remains in the
same shape and state. The ratio between the voltage and the current is called the
resistance (measured in ohms) of the object between the points where the voltage was
applied. The resistance of a sample of standard dimensions (unit length and unit
cross-sectional area) of a material at a given temperature is called the resistivity of the
material. The inverses of resistance and resistivity are conductance and conductivity,
respectively.
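
As a concrete illustration of these definitions, here is a minimal Python sketch that
computes the resistance of a wire from its resistivity and geometry and then applies Ohm's
law; the copper resistivity used is an assumed, typical handbook figure, not a value from
this text:

```python
# Minimal sketch: resistance from resistivity, then Ohm's law.
# The resistivity below is an assumed, typical handbook figure for copper at room temperature.

RHO_COPPER = 1.68e-8  # resistivity in ohm-metres (assumed value)

def wire_resistance(resistivity, length_m, cross_section_m2):
    """Resistance of a uniform conductor: R = rho * L / A."""
    return resistivity * length_m / cross_section_m2

def current_for_voltage(voltage_v, resistance_ohm):
    """Ohm's law: I = V / R."""
    return voltage_v / resistance_ohm

if __name__ == "__main__":
    # Example: 10 m of 1.5 mm^2 copper wire with 12 V applied across it.
    r = wire_resistance(RHO_COPPER, length_m=10.0, cross_section_m2=1.5e-6)
    i = current_for_voltage(12.0, r)
    print(f"Resistance: {r:.4f} ohm, current at 12 V: {i:.1f} A")
```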

Most familiar conductors are metallic. Copper is the most common material for electrical
wiring, and gold for high-quality surface-to-surface contacts. However, there are also
many non-metallic conductors, including graphite, solutions of salts, and all plasmas. See
electrical conduction for more information on the physical mechanism for charge flow in
materials.

Non-conducting materials lack mobile charges, and so resist the flow of electric current,
generating heat. In fact, all materials offer some resistance and warm up when a current
flows. Thus, proper design of an electrical conductor takes into account the temperature
that the conductor needs to be able to endure without damage, as well as the quantity of
electrical current. The motion of charges also creates an electromagnetic field around the
conductor that exerts a mechanical radial squeezing force on the conductor. A conductor
of a given material and volume (length x cross-sectional area) has no real limit to the
current it can carry without being destroyed as long as the heat generated by the resistive
loss is removed and the conductor can withstand the radial forces. This effect is
especially critical in printed circuits, where conductors are relatively small and close
together, and inside an enclosure: the heat produced, if not properly removed, can cause
fusing (melting) of the tracks.

Since all conductors have some resistance, and all insulators will carry some current,
there is no theoretical dividing line between conductors and insulators. However, there is
a large gap between the conductance of materials that will carry a useful current at
working voltages and those that will carry a negligible current for the purpose in hand, so
the categories of insulator and conductor do have practical utility.

Thermal and electrical conductivity often go together (for instance, most metals are both
electrical and thermal conductors). However, some materials are practical electrical
conductors without being good thermal conductors.

Conductor materials

Of the metals commonly used for conductors, copper has a high conductivity. Silver is
more conductive, but due to cost it is not practical in most cases. However, it is used in
specialized equipment, such as satellites, and as a thin plating to mitigate skin effect
losses at high frequencies. Because of its ease of connection by soldering or clamping,
copper is still the most common choice for most light-gauge wires.

Compared to copper, aluminium has worse conductivity per unit volume, but better
conductivity per unit weight. In many cases, weight is more important than volume,
making aluminium the 'best' conductor material for certain applications. For example, it
is commonly used for large-scale power distribution conductors such as overhead power
lines. In many such cases, aluminium is used over a steel core that provides much greater
tensile strength than would the aluminium alone.
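
To make the volume-versus-weight trade-off concrete, the short Python sketch below
compares conductivity per unit volume and per unit mass for copper and aluminium; the
resistivity and density figures are assumed, typical handbook values rather than numbers
from this text:

```python
# Compare copper and aluminium by conductivity per unit volume and per unit mass.
# Resistivities (ohm-m) and densities (kg/m^3) are assumed, typical handbook values.

materials = {
    "copper":    {"resistivity": 1.68e-8, "density": 8960.0},
    "aluminium": {"resistivity": 2.65e-8, "density": 2700.0},
}

for name, props in materials.items():
    sigma = 1.0 / props["resistivity"]        # conductivity per unit volume, S/m
    sigma_per_kg = sigma / props["density"]   # conductivity per unit mass, S*m^2/kg
    print(f"{name:9s}  sigma = {sigma:.2e} S/m   sigma/density = {sigma_per_kg:.2e}")

# Typical output shows copper ahead per unit volume but aluminium ahead per unit mass,
# which is why overhead power lines are usually aluminium (often over a steel core).
```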

Gold is occasionally used for very fine wires such as those used to wire bond integrated
circuits to their lead frames. The contacts in electrical connectors are also commonly gold
plated or gold flashed (over nickel). Silver is a better conductor than gold; however, gold
is very resistant to the surface corrosion that is commonly suffered by copper, silver, or
tin/lead alloys. This corrosion would have a very detrimental effect on connection quality
over time; gold plating avoids that.

Conductor voltage

The voltage on a conductor is determined by the connected circuitry and has nothing to
do with the conductor itself. Conductors are usually surrounded by and/or supported by
insulators and the insulation determines the maximum voltage that can be applied to any
given conductor.

Conductor ampacity

The ampacity of a conductor, that is, the amount of current it can carry, is related to its
electrical resistance: a lower-resistance conductor can carry more current. The resistance,
in turn, is determined by the material the conductor is made from (as described above)
and the conductor's size. For a given material, conductors with a larger cross-sectional
area have less resistance than conductors with a smaller cross-sectional area.

For bare conductors, the ultimate limit is the point at which power lost to resistance
causes the conductor to melt. Aside from fuses, most conductors in the real world are
operated far below this limit, however. For example, household wiring is usually
insulated with PVC that is only rated to operate to about 60 °C; therefore, the
current flowing in such wires must be limited so that it never heats the copper conductor
above 60 °C, which would create a risk of fire. Other, more expensive insulations such as
PTFE (Teflon) or fiberglass may allow operation at much higher temperatures.
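
As a rough, hedged illustration of why current must be limited, this Python sketch
estimates the resistive power dissipated per metre of copper wire at a few currents (the
I²R loss). It uses an assumed copper resistivity and ignores insulation, bundling, and
cooling details, so it is not a substitute for the ampacity tables in wiring codes:

```python
# Rough I^2*R heating estimate for a copper conductor.
# Assumed copper resistivity; real ampacity ratings come from wiring codes, not this sketch.

RHO_COPPER = 1.68e-8  # ohm-metres (assumed handbook value at room temperature)

def power_loss_per_metre(current_a, cross_section_mm2):
    """Resistive power dissipated per metre of wire, in watts."""
    area_m2 = cross_section_mm2 * 1e-6
    resistance_per_metre = RHO_COPPER / area_m2  # ohms per metre
    return current_a ** 2 * resistance_per_metre

if __name__ == "__main__":
    for current in (5, 10, 16, 20):
        p = power_loss_per_metre(current, cross_section_mm2=1.5)
        print(f"{current:>2} A in 1.5 mm^2 copper: ~{p:.2f} W dissipated per metre")
    # Heating grows quadratically with current, which is what ultimately limits the
    # safe operating current for a given insulation rating.
```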

Semiconductor

A semiconductor is a solid that has electrical conductivity in between that of a conductor
and that of an insulator, and can be controlled over a wide range, either permanently or
dynamically. Semiconductors are tremendously important in technology. Semiconductor
devices, electronic components made of semiconductor materials, are essential in modern
electrical devices. Examples range from computers to cellular phones to digital audio
players. Silicon is used to create most semiconductors commercially, but dozens of other
materials are used as well.

Overview

Semiconductors are very similar to insulators. The two categories of solids differ
primarily in that insulators have larger band gaps — energies that electrons must acquire
to be free to move from atom to atom. In semiconductors at room temperature, just as in
insulators, very few electrons gain enough thermal energy to leap the band gap from the
valence band to the conduction band, which is necessary for electrons to be available for
electric current conduction. For this reason, pure semiconductors and insulators, in the
absence of applied electric fields, have roughly similar resistance. The smaller band gaps
of semiconductors, however, allow for other means besides temperature to control their
electrical properties.

Semiconductors' intrinsic electrical properties are often permanently modified by
introducing impurities, in a process known as doping. Usually it is sufficient to
approximate that each impurity atom adds one electron or one "hole" (a concept to be
discussed later) that may flow freely. Upon the addition of a sufficiently large proportion
of impurity dopants, semiconductors will conduct electricity nearly as well as metals.
Depending on the kind of impurity, a doped region of semiconductor can have more
electrons or more holes, and is named N-type or P-type semiconductor material,
respectively. Junctions between regions of N- and P-type semiconductors create electric
fields, which sweep electrons and holes away from the junction, and this effect is
critical to semiconductor device operation. Also, a gradient in the impurity concentration
produces a small electric field in the region, which can be used to accelerate non-
equilibrium electrons or holes.

In addition to permanent modification through doping, the resistance of semiconductors
is normally modified dynamically by applying electric fields. The ability to control
resistance/conductivity in regions of semiconductor material dynamically through the
application of electric fields is the feature that makes semiconductors useful. It has led to
the development of a broad range of semiconductor devices, like transistors and diodes.
Semiconductor devices that have dynamically controllable conductivity, such as
transistors, are the building blocks of integrated-circuit devices like the microprocessor.
These "active" semiconductor devices (transistors) are combined with passive
components implemented from semiconductor material such as capacitors and resistors,
to produce complete electronic circuits.
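
To illustrate how doping sets conductivity, here is a minimal Python sketch based on the
standard drift formula sigma = q(n·mu_n + p·mu_p). The mobilities and intrinsic carrier
concentration for silicon are assumed textbook values, and the sketch ignores the mobility
degradation that occurs at heavy doping:

```python
# Minimal sketch: conductivity of silicon vs donor doping, sigma = q*(n*mu_n + p*mu_p).
# Mobilities and intrinsic carrier concentration are assumed textbook values for Si at
# room temperature; real mobilities drop as doping increases.

Q = 1.602e-19        # elementary charge, C
NI = 1.0e10          # intrinsic carrier concentration of Si, cm^-3 (assumed)
MU_N = 1400.0        # electron mobility, cm^2/(V*s) (assumed)
MU_P = 450.0         # hole mobility, cm^2/(V*s) (assumed)

def conductivity_n_type(donor_concentration_cm3):
    """Conductivity (S/cm) of n-type Si, assuming full donor ionization."""
    n = donor_concentration_cm3          # majority electrons
    p = NI ** 2 / n                      # minority holes from the np product
    return Q * (n * MU_N + p * MU_P)

if __name__ == "__main__":
    for nd in (1e14, 1e16, 1e18):
        print(f"Nd = {nd:.0e} cm^-3  ->  sigma ~ {conductivity_n_type(nd):.3g} S/cm")
    # Conductivity rises roughly in proportion to the dopant concentration, which is
    # why heavily doped semiconductors conduct nearly as well as metals.
```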

In most semiconductors, when electrons lose enough energy to fall from the conduction
band to the valence band (the energy levels above and below the band gap), they often
emit light, a quantum of energy in or near the visible part of the electromagnetic
spectrum. This emission process underlies the light-emitting diode (LED) and the
semiconductor laser, both of which are commercially very important. Conversely, in
photodetectors the absorption of light excites electrons from the valence band to the
higher-energy conduction band, producing a signal that varies with the light's intensity.
This is useful for fiber-optic communications and provides the basis for solar cells.
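
As a brief worked example of the link between band gap and emitted light, the sketch
below converts a band-gap energy into the corresponding photon wavelength via
lambda = hc/E. The band-gap figures for Si, GaAs and GaN are assumed, approximate
textbook values:

```python
# Photon wavelength corresponding to a band-gap energy: lambda = h*c / E_gap.
# Band-gap energies below are assumed, approximate textbook values.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def emission_wavelength_nm(band_gap_ev):
    """Wavelength (nm) of a photon whose energy equals the band gap."""
    return H * C / (band_gap_ev * EV) * 1e9

if __name__ == "__main__":
    for material, eg in (("Si", 1.12), ("GaAs", 1.42), ("GaN", 3.4)):
        print(f"{material:4s} Eg = {eg:.2f} eV -> ~{emission_wavelength_nm(eg):.0f} nm")
    # Wider band gaps give shorter wavelengths: GaN-based diodes emit blue/violet light,
    # while silicon's gap corresponds to the infrared (and Si emits only weakly, because
    # its band gap is indirect).
```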

Semiconductors may be elemental materials such as silicon and germanium, or
compound semiconductors such as gallium arsenide and indium phosphide, or alloys such
as silicon germanium or aluminium gallium arsenide.

Band structure

Figure: Band structure of a semiconductor showing a full valence band and an empty
conduction band.

There are three popular ways to describe the electronic structure of a crystal. The first
starts from single atoms. An atom has discrete energy levels. When two atoms come
close, each energy level splits into an upper and a lower level, and the corresponding
states delocalize across the two atoms. With more atoms the number of levels increases,
and groups of levels form bands. Semiconductors contain many bands. If there is a large
energy separation between the highest occupied state and the lowest unoccupied state, a
gap will likely remain between occupied and unoccupied bands even after band formation.

A second description starts with free electron waves. When an electrostatic potential due
to the ion cores is gradually introduced, Bragg reflection prevents some waves from
penetrating the bulk, and a band gap opens. In this description it is not obvious why the
number of electrons fills up exactly all the states below the gap.

A third description starts with two atoms. The split states form a covalent bond, in which
two electrons with opposite spins sit mostly between the two atoms. Adding more atoms
is then taken to produce not further splitting but more bonds; this is the way silicon is
typically drawn. In this picture the band gap corresponds to lifting an electron from the
lower (bonding) level into the upper (anti-bonding) level, yet bulk silicon does not lose
atoms anywhere near as easily as electrons wander through it. This model is also poorly
suited to explaining how the band gap can vary smoothly in a graded heterojunction.

As in other solids, the electrons in semiconductors can have energies only within
certain bands (i.e. ranges of energy levels) between the energy of the ground state,
corresponding to electrons tightly bound to the atomic nuclei of the material, and the free
electron energy, which is the energy required for an electron to escape entirely from the
material. The energy bands each correspond to a large number of discrete quantum states
of the electrons, and most of the states with low energy (closer to the nucleus) are full, up
to a particular band called the valence band. Semiconductors and insulators are
distinguished from metals because in semiconductors and insulators the valence band is
very nearly full under usual operating conditions, so that very few electrons are available
in the conduction band.

The ease with which electrons in a semiconductor can be excited from the valence band
to the conduction band depends on the band gap between the bands, and it is the size of
this energy bandgap that serves as an arbitrary dividing line (roughly 4 eV) between
semiconductors and insulators.

The electrons must move between states to conduct electric current, and so due to the
Pauli exclusion principle full bands do not contribute to the electrical conductivity.
However, as the temperature of a semiconductor rises above absolute zero, the spread of
energies of the electrons in a given band increases, and some electrons are likely
to be found in energy states of the conduction band, which is the band immediately
above the valence band. The current-carrying electrons in the conduction band are known
as "free electrons", although they are often simply called "electrons" if context allows this
usage to be clear.

Electrons excited to the conduction band also leave behind electron holes, or unoccupied
states in the valence band. Both the conduction band electrons and the valence band holes
contribute to electrical conductivity. The holes themselves do not actually move, but a
neighbouring electron can move to fill the hole, leaving a hole at the place it has just
come from; in this way the holes appear to move, behaving as if they were actual
positively charged particles.

A covalent bond between neighboring atoms in the solid is roughly ten times stronger
than the binding of a single electron to its atom, so freeing the electron does not destroy
the crystal structure.

The notion of holes, which was introduced for semiconductors, can also be applied to
metals, where the Fermi level lies within the conduction band. With most metals the Hall
effect reveals electrons to be the charge carriers, but some metals have a mostly filled
conduction band, and the Hall effect reveals positive charge carriers, which are not the
ion cores, but holes. Contrast this with conductors such as solutions of salts, or plasmas. In
the case of a metal, only a small amount of energy is needed for the electrons to find
other unoccupied states to move into, and hence for current to flow. Sometimes even in
this case it may be said that a hole was left behind, to explain why the electron does not
fall back to lower energies: it cannot find a hole. Ultimately, in both kinds of material,
electron-phonon scattering and defects are the dominant causes of resistance.

Figure: The Fermi-Dirac distribution. States with energy ε below the Fermi energy, here μ,
have a higher probability n of being occupied, and those above are less likely to be
occupied. Smearing of the distribution increases with temperature.

The energy distribution of the electrons determines which of the states are filled and
which are empty. This distribution is described by Fermi-Dirac statistics. The distribution
is characterized by the temperature of the electrons, and the Fermi energy or Fermi level.
Under absolute zero conditions the Fermi energy can be thought of as the energy up to
which available electron states are occupied. At higher temperatures, the Fermi energy is
the energy at which the probability of a state being occupied has fallen to 0.5.

The dependence of the electron energy distribution on temperature also explains why the
conductivity of a semiconductor has a strong temperature dependency, as a
semiconductor operating at lower temperatures will have fewer available free electrons
and holes able to do the work.
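
The Fermi-Dirac occupation probability has the simple closed form
f(E) = 1 / (exp((E - mu)/kT) + 1). The short Python sketch below evaluates it at a few
temperatures to show the thermal smearing described above; the state placed 0.55 eV above
the Fermi level is purely an illustrative choice, not a material parameter from the text:

```python
import math

# Fermi-Dirac occupation probability f(E) = 1 / (exp((E - mu)/(k*T)) + 1).
# Energies in eV; K_B is Boltzmann's constant in eV/K.

K_B = 8.617e-5  # eV/K

def fermi_dirac(energy_ev, mu_ev, temperature_k):
    """Probability that a state at energy_ev is occupied."""
    if temperature_k == 0:
        return 1.0 if energy_ev < mu_ev else 0.0
    return 1.0 / (math.exp((energy_ev - mu_ev) / (K_B * temperature_k)) + 1.0)

if __name__ == "__main__":
    mu = 0.0  # measure energies relative to the Fermi level
    # Illustrative state 0.55 eV above the Fermi level (an assumed, example-only number).
    for t in (150, 300, 600):
        f = fermi_dirac(0.55, mu, t)
        print(f"T = {t:3d} K: occupation of a state 0.55 eV above mu ~ {f:.2e}")
    # Occupation grows steeply with temperature, which is why semiconductor
    # conductivity is so strongly temperature dependent.
```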

Carrier generation and recombination

When ionizing radiation strikes a semiconductor, it may excite an electron out of its
energy level and consequently leave a hole. This process is known as electron–hole pair
generation. Electron-hole pairs are constantly generated from thermal energy as well, in
the absence of any external energy source.

Electron-hole pairs are also apt to recombine. Conservation of energy demands that these
recombination events, in which an electron loses an amount of energy larger than the
band gap, be accompanied by the emission of thermal energy (in the form of phonons) or
radiation (in the form of photons).

In the steady state, the generation and recombination of electron–hole pairs are in
equipoise. The number of electron-hole pairs in the steady state at a given temperature is
determined by quantum statistical mechanics. The precise quantum mechanical
mechanisms of generation and recombination are governed by conservation of energy
and conservation of momentum.

As the probability that electrons and holes meet together is proportional to the product of
their concentrations, the product is, in steady state, nearly constant at a given temperature,
provided that there is no significant electric field (which might "flush" carriers of both
types, or move them in from neighbouring regions containing more of them) or externally
driven pair generation. The product is a function of the temperature, as the probability of
getting enough thermal energy to produce a pair increases with temperature; it is
approximately exp(-Egap/kT), where Egap is the band gap, k is Boltzmann's constant and
T is the absolute temperature.
The probability of meeting is increased by carrier traps – impurities or dislocations which
can trap an electron or hole and hold it until a pair is completed. Such carrier traps are
sometimes purposely added to reduce the time needed to reach the steady state.
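
A small numerical sketch of that temperature dependence: the relative size of the
electron-hole product at two temperatures scales roughly as exp(-Egap/kT). The silicon band
gap used below is an assumed textbook value, and the prefactor's own (weaker) temperature
dependence is ignored:

```python
import math

# Relative temperature dependence of the electron-hole product, n*p ~ exp(-E_gap / (k*T)).
# The silicon band gap is an assumed textbook value; this is a rough sketch only.

K_B = 8.617e-5     # Boltzmann constant, eV/K
E_GAP_SI = 1.12    # band gap of silicon, eV (assumed)

def np_product_factor(temperature_k, band_gap_ev=E_GAP_SI):
    """Boltzmann factor governing the electron-hole product at a given temperature."""
    return math.exp(-band_gap_ev / (K_B * temperature_k))

if __name__ == "__main__":
    t_cold, t_hot = 300.0, 350.0
    ratio = np_product_factor(t_hot) / np_product_factor(t_cold)
    print(f"Raising silicon from {t_cold:.0f} K to {t_hot:.0f} K multiplies the "
          f"electron-hole product by roughly {ratio:.0f}x")
```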

Doping

The property of semiconductors that makes them most useful for constructing electronic
devices is that their conductivity may easily be modified by introducing impurities into
their crystal lattice. The process of adding controlled impurities to a semiconductor is
known as doping. The amount of impurity, or dopant, added to an intrinsic (pure)
semiconductor varies its level of conductivity. Doped semiconductors are often referred
to as extrinsic.

Dopants

The materials chosen as suitable dopants depend on the atomic properties of both the
dopant and the material to be doped. In general, dopants that produce the desired
controlled changes are classified as either electron acceptors or donors. A donor atom that
activates (that is, becomes incorporated into the crystal lattice) donates weakly-bound
valence electrons to the material, creating excess negative charge carriers. These weakly-
bound electrons can move about in the crystal lattice relatively freely and can facilitate
conduction in the presence of an electric field. (The donor atoms introduce states just
below, but very close to, the conduction band edge. At room temperature, electrons in
these states are easily excited into the conduction band, becoming free electrons.) Conversely,
an activated acceptor produces a hole. Semiconductors doped with donor impurities are
called n-type, while those doped with acceptor impurities are known as p-type. The n and
p type designations indicate which charge carrier acts as the material's majority carrier.
The opposite carrier is called the minority carrier, which exists due to thermal excitation
at a much lower concentration compared to the majority carrier.

For example, the pure semiconductor silicon has four valence electrons. In silicon, the
most common dopants are IUPAC group 13 (commonly known as group III) and group
15 (commonly known as group V) elements. Group 13 elements all contain three valence
electrons, causing them to function as acceptors when used to dope silicon. Group 15
elements have five valence electrons, allowing them to act as donors. Therefore, a
silicon crystal doped with boron creates a p-type semiconductor whereas one doped with
phosphorus results in an n-type material.

Effect on band structure

Doping a semiconductor crystal introduces allowed energy states within the band gap but
very close to the energy band that corresponds with the dopant type. In other words,
donor impurities create states near the conduction band while acceptors create states near
the valence band. The gap between these energy states and the nearest energy band is
usually referred to as dopant-site bonding energy or EB and is relatively small. For
example, the EB for boron in silicon bulk is 0.045 eV, compared with silicon's band gap
of about 1.12 eV. Because EB is so small, it takes little energy to ionize the dopant atoms
and create free carriers in the conduction or valence bands. Usually the thermal energy
available at room temperature is sufficient to ionize most of the dopant.
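
A quick numerical check of that claim: the sketch below compares the boron binding energy
quoted above (0.045 eV) with the thermal energy kT at room temperature, and contrasts the
corresponding Boltzmann factors. It is an order-of-magnitude illustration only, not a full
ionization-statistics calculation:

```python
import math

# Compare the dopant-site binding energy E_B with thermal energy k*T at room temperature.
# Order-of-magnitude illustration only; full dopant ionization statistics are more involved.

K_B = 8.617e-5           # Boltzmann constant, eV/K
E_B_BORON_IN_SI = 0.045  # eV, figure quoted in the text
E_GAP_SI = 1.12          # eV, silicon band gap quoted in the text

if __name__ == "__main__":
    kt_room = K_B * 300.0
    print(f"kT at 300 K            : {kt_room:.4f} eV")
    print(f"E_B / kT (dopant)      : {E_B_BORON_IN_SI / kt_room:.1f}")
    print(f"E_gap / kT (intrinsic) : {E_GAP_SI / kt_room:.1f}")
    print(f"Boltzmann factor for E_B   : {math.exp(-E_B_BORON_IN_SI / kt_room):.2f}")
    print(f"Boltzmann factor for E_gap : {math.exp(-E_GAP_SI / kt_room):.1e}")
    # E_B is only a couple of kT, so dopants are largely ionized at room temperature,
    # whereas exciting carriers across the full band gap is exponentially harder.
```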

Dopants also have the important effect of shifting the material's Fermi level towards the
energy band that corresponds with the dopant with the greatest concentration. Since the
Fermi level must remain constant in a system in thermodynamic equilibrium, stacking
layers of materials with different properties leads to many useful electrical properties. For
example, the p-n junction's properties are due to the energy band bending that happens as
a result of lining up the Fermi levels in contacting regions of p-type and n-type material.

This effect is shown in a band diagram. The band diagram typically indicates the
variation in the valence band and conduction band edges versus some spatial dimension,
often denoted x. The Fermi energy is also usually indicated in the diagram. Sometimes
the intrinsic Fermi energy, Ei, which is the Fermi level in the absence of doping, is
shown. These diagrams are useful in explaining the operation of many kinds of
semiconductor devices.

Preparation of semiconductor materials

Semiconductors with predictable, reliable electronic properties are necessary for mass
production. The level of chemical purity needed is extremely high because the presence
of impurities even in very small proportions can have large effects on the properties of
the material. A high degree of crystalline perfection is also required, since faults in
crystal structure (such as dislocations, twins, and stacking faults) interfere with the
semiconducting properties of the material. Crystalline faults are a major cause of
defective semiconductor devices. The larger the crystal, the more difficult it is to achieve
the necessary perfection. Current mass production processes use crystal ingots between
four and twelve inches (300 mm) in diameter which are grown as cylinders and sliced
into wafers.

Because of the required level of chemical purity and the perfection of the crystal structure
which are needed to make semiconductor devices, special methods have been developed
to produce the initial semiconductor material. A technique for achieving high purity
includes growing the crystal using the Czochralski process. An additional step that can be
used to further increase purity is known as zone refining. In zone refining, part of a solid
crystal is melted. The impurities tend to concentrate in the melted region, while the
desired material recrystallizes, leaving the solid material more pure and with fewer
crystalline faults.

In manufacturing semiconductor devices involving heterojunctions between different
semiconductor materials, the lattice constant, which is the length of the repeating element
of the crystal structure, is important for determining the compatibility of materials.

The History of Superconductors

Superconductors, materials that have no resistance to the flow of electricity, are one of
the last great frontiers of scientific discovery. Not only have the limits of
superconductivity not yet been reached, but the theories that explain superconductor
behavior seem to be constantly under review. In 1911 superconductivity was first
observed in mercury by the Dutch physicist Heike Kamerlingh Onnes of Leiden University.
When he cooled it to the temperature of liquid helium, 4 kelvin
(-452 F, -269 C), its resistance suddenly disappeared. The Kelvin scale represents an
"absolute" scale of temperature. Thus, it was necessary for Onnes to come within 4
degrees of the coldest temperature that is theoretically attainable to witness the
phenomenon of superconductivity. Later, in 1913, he won a Nobel Prize in physics for
his research in this area.
The next great milestone in understanding how matter behaves at extremely cold
temperatures occurred in 1933. German researchers Walter Meissner and Robert
Ochsenfeld discovered that a superconducting material will repel a magnetic field. A
magnet moving past a conductor induces currents in the conductor; this is the principle
on which the electric generator operates. But in a superconductor the induced currents
exactly mirror the field that would otherwise have penetrated the
superconducting material, causing the magnet to be repelled. This phenomenon is
known as strong diamagnetism and is today often referred to as the "Meissner effect" (an
eponym). The Meissner effect is so strong that a magnet can actually be levitated over a
superconductive material.

In subsequent decades other superconducting metals, alloys and compounds were
discovered. In 1941 niobium nitride was found to superconduct at 16 K. In 1953
vanadium-silicon displayed superconductive properties at 17.5 K. And, in 1962 scientists
at Westinghouse developed the first commercial superconducting wire, an alloy of
niobium and titanium (NbTi). High-energy, particle-accelerator electromagnets made of
copper-clad niobium-titanium were then developed in the 1960s at the Rutherford-
Appleton Laboratory in the UK, and were first employed in a superconducting
accelerator at the Fermilab Tevatron in the US in 1987.

The first widely accepted theoretical understanding of superconductivity was
advanced in 1957 by American physicists John Bardeen, Leon Cooper, and John
Schrieffer. Their theory of superconductivity became known as the BCS theory
- derived from the first letter of each man's last name - and won them a Nobel prize in
1972. The mathematically-complex BCS theory explained superconductivity at
temperatures close to absolute zero for elements and simple alloys. However, at higher
temperatures and with different superconductor systems, the BCS theory has
subsequently become inadequate to fully explain how superconductivity is occurring.

Another significant theoretical advancement came in 1962 when Brian D. Josephson,
a graduate student at Cambridge University, predicted that electrical current
would flow between 2 superconducting materials - even when they are separated by a
non-superconductor or insulator. His prediction was later confirmed and won him a share
of the 1973 Nobel Prize in Physics. This tunneling phenomenon is today known as the
"Josephson effect" and has been applied to electronic devices such as the SQUID, an
instrument capabable of detecting even the weakest magnetic fields. (Below SQUID
graphic courtesy Quantum Design.)

The 1980s were a decade of unrivaled discovery in the field of superconductivity. In
1964 Bill Little of Stanford University had suggested the possibility of organic (carbon-
based) superconductors. The first of these theoretical superconductors was successfully
synthesized in 1980 by Danish researcher Klaus Bechgaard of the University of
Copenhagen and three French team members. (TMTSF)2PF6 had to be cooled to an
incredibly cold 1.2 K transition temperature (known as Tc) and subjected to high pressure
to superconduct. But, its mere existence proved the possibility of "designer" molecules -
molecules fashioned to perform in a predictable way.
Then, in 1986, a truly breakthrough discovery was made in the field of
superconductivity. Alex Müller and Georg Bednorz, researchers at the IBM
Research Laboratory in Rüschlikon, Switzerland, created a brittle ceramic compound that
superconducted at the highest temperature then known: 30 K. What made this discovery
so remarkable was that ceramics are normally insulators. They don't conduct electricity
well at all. So, researchers had not considered them as possible high-temperature
superconductor candidates. The lanthanum, barium, copper and oxygen compound that
Müller and Bednorz synthesized behaved in a not-yet-understood way. (The original
article was printed in Zeitschrift für Physik Condensed Matter, April 1986.) The discovery
of this first of the superconducting copper oxides (cuprates) won the two men a Nobel Prize
the following year. It was later found that tiny amounts of this material were actually
superconducting at 58 K, due to a small amount of lead having been added as a
calibration standard - making the discovery even more noteworthy.

Müller and Bednorz's discovery triggered a flurry of activity in the field of
superconductivity. Researchers around the world began "cooking" up ceramics of every
imaginable combination in a quest for higher and higher Tc's. In January of 1987 a
research team at the University of Alabama-Huntsville substituted Yttrium for
Lanthanum in the Müller and Bednorz molecule and achieved an incredible 92 K Tc. For
the first time a material (today referred to as YBCO) had been found that would
superconduct at temperatures warmer than liquid nitrogen - a commonly available
coolant. Additional milestones have since been achieved using exotic - and often toxic -
elements in the base perovskite ceramic. The current class (or "system") of ceramic
superconductors with the highest transition temperatures is the mercuric cuprates. The
first synthesis of one of these compounds was achieved in 1993 by Prof. Dr. Ulker
Onbasli at the University of Colorado and by the team of A. Schilling, M. Cantoni, J. D.
Guo, and H. R. Ott of Zurich, Switzerland. The world record Tc of 138 K is now held by
a thallium-doped mercuric cuprate composed of the elements mercury, thallium,
barium, calcium, copper and oxygen. The Tc of this ceramic superconductor was
confirmed by Dr. Ron Goldfarb at the National Institute of Standards and Technology-
Colorado in February of 1994. Under extreme pressure its Tc can be coaxed up even
higher - approximately 25 to 30 degrees more at 300,000 atmospheres.

In recent years, many discoveries regarding the novel nature of superconductivity
have been made. In 1997 researchers found that at a temperature very near absolute zero
an alloy of gold and indium was both a superconductor and a natural magnet.
Conventional wisdom held that a material with such properties could not exist! Since
then, over a half-dozen such compounds have been found. Recent years have also seen
the discovery of the first high-temperature superconductor that does NOT contain any
copper (2000), and the first all-metal perovskite superconductor (2001).

Also in 2001 a material that had been sitting on laboratory shelves for decades was
found to be an extraordinary new superconductor. Japanese researchers measured the
transition temperature of magnesium diboride at 39 Kelvin - far above the highest Tc of
any of the elemental or binary alloy superconductors. While 39 K is still well below the
Tc's of the "warm" ceramic superconductors, subsequent refinements in the way MgB2 is
fabricated have paved the way for its use in industrial applications. Laboratory testing has
found MgB2 will outperform NbTi and Nb3Sn wires in high magnetic field applications
like MRI.

Though a theory to explain high-temperature superconductivity still eludes modern
science, clues occasionally appear that contribute to our understanding of the exotic
nature of this phenomenon. In 2005, for example, Superconductors.ORG discovered that
increasing the weight ratios of alternating planes within the layered perovskites can often
increase Tc significantly. This has led to the discovery of no less than 29 new high-
temperature superconductors, including a candidate for a new world record.

Researchers do agree on one thing: discovery in the field of superconductivity is as
much serendipity as it is science. Stay tuned!

Uses for Superconductors


Magnetic-levitation is an application where superconductors perform extremely well.
Transport vehicles such as trains can be made to "float" on strong superconducting
magnets, virtually eliminating friction between the train and its tracks. Not only would
conventional electromagnets waste much of the electrical energy as heat, they would
have to be physically much larger than superconducting magnets. A landmark for the
commercial use of MAGLEV technology occurred in 1990 when it gained the status of a
nationally-funded project in Japan. The Minister of Transport authorized construction of
the Yamanashi Maglev Test Line which opened on April 3, 1997. In December 2003, the
MLX01 test vehicle attained an incredible speed of 361 mph (581 km/h).

Although the technology has now been proven, the wider use of MAGLEV vehicles
has been constrained by political and environmental concerns (strong magnetic fields can
create a bio-hazard). The world's first MAGLEV train to be adopted into commercial
service, a shuttle in Birmingham, England, shut down in 1997 after operating for 11
years. A Sino-German maglev is currently operating over a 30-km course at Pudong
International Airport in Shanghai, China. The U.S. plans to put its first (non-
superconducting) Maglev train into operation on a Virginia college campus.

Figure: MRI of a human skull.

An area where superconductors can perform a life-saving function is in the field of
biomagnetism. Doctors need a non-invasive means of determining what's going on inside
the human body. By applying a strong superconductor-derived magnetic field to the
body, hydrogen atoms in the body's water and fat molecules are forced to accept
energy from the magnetic field. They then release this energy at a frequency that can be
detected and displayed graphically by a computer. Magnetic Resonance Imaging (MRI)
was actually discovered in the mid-1940s, but the first MRI exam on a human being was
not performed until July 3, 1977. And it took almost five hours to produce one image!
Today's faster computers process the data in much less time.

The Korean Superconductivity Group within KRISS has carried biomagnetic
technology a step further with the development of a double-relaxation oscillation SQUID
(Superconducting QUantum Interference Device) for use in magnetoencephalography.
SQUIDs are capable of sensing a change in a magnetic field over a billion times weaker
than the force that moves the needle on a compass (compass: 5e-5 T; SQUID: 1e-14 T).
With this technology, the body can be probed to certain depths without the need for the
strong magnetic fields associated with MRIs.
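
As a quick sanity check on the "over a billion times" comparison, this tiny Python sketch
takes the two field values quoted above and computes their ratio:

```python
# Ratio of the geomagnetic field that moves a compass needle to a SQUID's field resolution,
# using the two values quoted in the text.

COMPASS_FIELD_T = 5e-5       # tesla, field acting on a compass needle
SQUID_RESOLUTION_T = 1e-14   # tesla, SQUID sensitivity figure from the text

ratio = COMPASS_FIELD_T / SQUID_RESOLUTION_T
print(f"A SQUID resolves fields about {ratio:.0e} times weaker "
      f"than the field that moves a compass needle.")  # ~5e9, i.e. billions of times
```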

Probably the one event, more than any other, that has been responsible for putting
"superconductors" into the American lexicon was the Superconducting Super-Collider
project planned for construction in Ellis county, Texas. Though Congress cancelled the
multi-billion dollar effort in 1993, the concept of such a large, high-energy collider would
never have been viable without superconductors. High-energy particle research hinges on
being able to accelerate sub-atomic particles to nearly the speed of light. Superconducting
magnets make this possible. CERN, a consortium of several European nations, is doing
something similar with its Large Hadron Collider (LHC) now under construction along
the Franco-Swiss border.

Other related web sites worth visiting include the proton-antiproton collider page at
Fermilab. This was the first facility to use superconducting magnets. Get information on
the electron-proton collider HERA at the German lab pages of DESY (with English text).
Lastly, Brookhaven National Laboratory features a page dedicated to its RHIC heavy-ion
collider.

Electric generators made with superconducting wire are far more efficient than
conventional generators wound with copper wire. In fact, their efficiency is above 99%
and their size about half that of conventional generators. These facts make them very
lucrative ventures for power utilities. General Electric has estimated the potential
worldwide market for superconducting generators in the next decade at around $20-30
billion. Late in 2002 GE Power Systems received $12.3 million in funding from
the U.S. Department of Energy to move high-temperature superconducting generator
technology toward full commercialization.

Other commercial power projects in the works that employ superconductor technology
include energy storage to enhance power stability. American Superconductor Corp.
received an order from Alliant Energy in late March 2000 to install a Distributed
Superconducting Magnetic Energy Storage System (D-SMES) in Wisconsin. Just one of
these 6 D-SMES units has a power reserve of over 3 million watts, which can be
retrieved whenever there is a need to stabilize line voltage during a disturbance in the
power grid. AMSC has also installed more than 22 of its D-VAR systems to provide
instantaneous reactive power support.

Figure: The General Atomics/Intermagnetics General superconducting Fault Current
Controller, employing HTS superconductors.

Recently, power utilities have also begun to use superconductor-based transformers
and "fault limiters". The Swiss-Swedish company ABB was the first to connect a
superconducting transformer to a utility power network in March of 1997. ABB also
recently announced the development of a 6.4 MVA (megavolt-ampere) fault current
limiter - the most powerful in the world. This new generation of HTS superconducting
fault limiters is being adopted because of its ability to respond in just thousandths of a
second to limit tens of thousands of amperes of current. Advanced Ceramics Limited is
another of several companies that make BSCCO-type fault limiters. Intermagnetics
General recently completed tests on its largest (15 kV class) power-utility-size fault limiter
at a Southern California Edison (SCE) substation near Norwalk, California. And both the
US and Japan have plans to replace underground copper power cables with
superconducting BSCCO cable-in-conduit cooled with liquid nitrogen. By doing this,
more current can be routed through existing cable tunnels. In one
instance 250 pounds of superconducting wire replaced 18,000 pounds of vintage copper
wire, making it over 7000% more space-efficient.

An idealized application for superconductors is to employ them in the transmission of
commercial power to cities. However, due to the high cost and impracticality of cooling
miles of superconducting wire to cryogenic temperatures, this has only happened with
short "test runs". In May of 2001 some 150,000 residents of Copenhagen, Denmark,
began receiving their electricity through HTS (high-temperature superconducting)
material. That cable was only 30 meters long, but proved adequate for testing purposes.
In the summer of 2001 Pirelli completed installation of three 400-foot HTS cables for
Detroit Edison at the Frisbie Substation capable of delivering 100 million watts of power.
This marked the first time commercial power has been delivered to customers of a US
power utility through superconducting wire. Intermagnetics General has announced that
its IGC-SuperPower subsidiary has joined with BOC and Sumitomo Electric in a $26
million project to install an underground, HTS power cable in Albany, New York, in
Niagara Mohawk Power Corporation's power grid. Sumitomo Electric's DI-BSCCO cable
was employed in the first in-grid power cable demonstration project sponsored by the
U.S. Department of Energy and New York Energy Research & Development Authority.
After connecting to the grid successfully in July 2006, the DI-BSCCO cable has been
supplying the power to approximately 70,000 households without any problems. The
long-term test will be completed in the 2007-2008 timeframe.

Figure: Hypres superconducting microchip, incorporating 6000 Josephson junctions.

The National Science Foundation, along with NASA and DARPA and various
universities, are currently researching "petaflop" computers. A petaflop is a thousand-
trillion floating point operations per second. Today's fastest computing operations have
only reached "teraflop" speeds - trillions of operations per second. Currently the fastest is
one of the IBM Blue Gene/L computers running at 280.6 teraflops per second (with
multiple CPU's). The fastest single processor is a Lenslet optical DSP running at 8
teraflops. It has been conjectured that devices on the order of 50 nanometers in size along
with unconventional switching mechanisms, such as the Josephson junctions associated
with superconductors, will be necessary to achieve such blistering speeds. TRW
researchers (now Northrop Grumman) have quantified this further by predicting that 100
billion Josephson junctions on 4000 microprocessors will be necessary to reach 32
petabits per second. These Josephson junctions are incorporated into field-effect
transistors which then become part of the logic circuits within the processors. Recently it
was demonstrated at the Weizmann Institute in Israel that the tiny magnetic fields that
penetrate Type 2 superconductors can be used for storing and retrieving digital
information. It is, however, not a foregone conclusion that computers of the future will be
built around superconducting devices. Competing technologies, such as quantum
(DELTT) transistors, high-density molecule-scale processors, and DNA-based
processing, also have the potential to achieve petaflop benchmarks.

In the electronics industry, ultra-high-performance filters are now being built. Since
superconducting wire has near zero resistance, even at high frequencies, many more filter
stages can be employed to achieve a desired frequency response. This translates into an
ability to pass desired frequencies and block undesirable frequencies in high-congestion
rf (radio frequency) applications such as cellular telephone systems. ISCO International
and Superconductor Technologies are companies currently offering such filters.

Superconductors have also found widespread applications in the military. HTSC
SQUIDs are being used by the U.S. Navy to detect mines and submarines. And
significantly smaller motors are being built for Navy ships using superconducting wire
and "tape". In mid-July 2001, American Superconductor unveiled a 5000-horsepower
motor made with superconducting wire. An even larger 36.5 MW HTS ship propulsion
motor was delivered to the U.S. Navy in late 2006.

The newest application for HTS wire is in the degaussing of naval vessels. American
Superconductor has announced the development of a superconducting degaussing cable.
Degaussing of a ship's hull eliminates residual magnetic fields which might otherwise
give away a ship's presence. In addition to reduced power requirements, HTS degaussing
cable offers reduced size and weight.

The military is also looking at using superconductive tape as a means of reducing the
length of very low frequency antennas employed on submarines. Normally, the lower the
frequency, the longer an antenna must be. However, inserting a coil of wire ahead of the
antenna will make it function as if it were much longer. Unfortunately, this loading coil
also increases system losses by adding the resistance of the coil's wire. Using
superconductive materials can significantly reduce losses in this coil. The Electronic
Materials and Devices Research Group at University of Birmingham (UK) is credited
with creating the first superconducting microwave antenna. Applications engineers
suggest that superconducting carbon nanotubes might be an ideal nano-antenna for high-
gigahertz and terahertz frequencies, once a method of achieving zero "on tube" contact
resistance is perfected.

The most ignominious military use of superconductors may come with the
deployment of "E-bombs". These are devices that make use of strong, superconductor-
derived magnetic fields to create a fast, high-intensity electromagnetic pulse (EMP) to
disable an enemy's electronic equipment. Such a device saw its first use in wartime in
March 2003 when US Forces attacked an Iraqi broadcast facility.

Type 1 superconducting elements at ambient pressure (critical temperature Tc and crystal
structure):

Lead (Pb)          7.196 K      FCC
Lanthanum (La)     4.88 K       HEX
Tantalum (Ta)      4.47 K       BCC
Mercury (Hg)       4.15 K       RHL
Tin (Sn)           3.72 K       TET
Indium (In)        3.41 K       TET
Palladium (Pd)*    3.3 K        (see note 1)
Chromium (Cr)*     3 K          (see note 1)
Thallium (Tl)      2.38 K       HEX
Rhenium (Re)       1.697 K      HEX
Protactinium (Pa)  1.40 K       TET
Thorium (Th)       1.38 K       FCC
Aluminum (Al)      1.175 K      FCC
Gallium (Ga)       1.083 K      ORC
Molybdenum (Mo)    0.915 K      BCC
Zinc (Zn)          0.85 K       HEX
Osmium (Os)        0.66 K       HEX
Zirconium (Zr)     0.61 K       HEX
Americium (Am)     0.60 K       HEX
Cadmium (Cd)       0.517 K      HEX
Ruthenium (Ru)     0.49 K       HEX
Titanium (Ti)      0.40 K       HEX
Uranium (U)        0.20 K       ORC
Hafnium (Hf)       0.128 K      HEX
Iridium (Ir)       0.1125 K     FCC
Beryllium (Be)     0.023 K (SRM 768)  HEX
Tungsten (W)       0.0154 K     BCC
Platinum (Pt)*     0.0019 K     (see note 1)
Lithium (Li)       0.0004 K     BCC
Rhodium (Rh)       0.000325 K   FCC

Many additional elements can be coaxed into a superconductive state with the
application of high pressure. For example, phosphorus appears to be the Type 1 element
with the highest Tc. But, it requires compression pressures of 2.5 Mbar to reach a Tc of
14-22 K. The above list is for elements at normal (ambient) atmospheric pressure. Other
known elemental superconductors include niobium, technetium and vanadium, which are
technically Type 2.
**Note 2: Normally bulk carbon (amorphous, diamond, graphite, white) will not
superconduct at any temperature. However, a Tc of 15K has been reported for elemental
carbon when the atoms are configured as highly-aligned, single-walled nanotubes. And
non-aligned, multi-walled nanotubes have shown superconductivity near 12K. Since the
penetration depth is much larger than the coherence length, nanotubes would be
characterized as "Type 2" superconductors.

Author's Comment: The information posted on this page was obtained from a variety of
sources including, but not limited to, the CRC Handbook of Chemistry and Physics, the
Technische Universität München, Reade Metals and Minerals Corp., industry news
sources, and various private researchers. A special thanks to Professor Bertil Sundqvist,
Department of Experimental Physics, Umea University, Sweden, also to Dr. Jeffery
Tallon, Industrial Research Ltd., New Zealand, and to Dr. James S. Schilling, Department
of Physics, Washington University.

Atypical Superconductors and the Future

As if ceramic superconductors were not strange enough, even more mysterious
superconducting systems have been discovered. One is based on compounds centered
around the "Fullerene". The fullerene name comes from the late designer-author
Buckminster Fuller. Fuller was the inventor of the geodesic dome, a structure with a
soccer ball shape. The fullerene - also called a buckminsterfullerene or "buckyball" -
exists on a molecular level when 60 carbon atoms join in a closed sphere. When doped
with one or more alkali metals the fullerene becomes a "fulleride" and has produced Tc's
ranging from 8 K for Na2Rb0.5Cs0.5C60 up to 40 K for Cs3C60. In 1993 researchers at the
State University of New York at Buffalo reported Tc's between 60 K and 70 K for C-60
doped with the interhalogen compound ICl.

Fullerenes, like ceramic superconductors, are a fairly recent discovery. In 1985,
professors Robert F. Curl, Jr. and Richard E. Smalley of Rice University in Houston and
Professor Sir Harold W. Kroto of the University of Sussex in Brighton, England,
accidentally stumbled upon them. The discovery of superconducting alkali metal
fullerides came in 1991 when Robert Haddon and colleagues at Bell Labs announced that
K3C60 had
been found to superconduct at 18 K.

Larger, non-spherical pure carbon fullerenes that will superconduct have only
recently been discovered. In April of 2001, Chinese researchers at Hong Kong University
found one-dimensional superconductivity in single-walled carbon nanotubes at around
15 K. And in February 2006, physicists in Japan showed that non-aligned, multi-walled
carbon nanotubes were superconductive at temperatures as high as 12 K. Silicon-based
fullerides like Na2Ba6Si46 will also superconduct. However, they are structured as infinite
networks, rather than discrete molecules.
