
© J.L. Heilbron
March 2011

Notes on
Physical Science Around 1900

The following pages comprise (1) a general introduction to the issues and institutions of physical
science from around 1870 to 1914; (2) an account of the leading experimental subject in physics,
centering on the investigation of cathode rays; (3) an outline of the theory of atomic structure up to
Bohr's first contributions to it; and (4) an indication of the applications of physics to World War I.

The point of departure, 1870, marks the beginning of construction of institutes for scientific research
in the universities and higher schools of Europe and the completion of powerful syntheses of large
domains of physical science previously treated as distinct. The statistical kinetic theory of gases, the
electromagnetic theory of light, the table of the chemical elements, and molecules in three dimensions
(stereochemistry) all came into existence within a decade of 1870. We may also date to around 1870
the rise of the first large-scale industries based on or developed by academic science: the organic dye
industry (the sites of the first industrial research laboratories), worldwide telegraphy (the incubator of
electric light and power), and steel manufacture (a prime stimulant to the metallurgical sciences).

I. General Background

1. The World

The year 1870 marked a watershed in political history down which science and technology
accelerated worldwide during the first half of the ensuing century. Beginning with the Meiji coup of
1868, which broke the feudal power of the shoguns, Japan rushed to domesticate Western science and
its products: it brought in foreign experts, learned Western languages, and set up universities (the
first, in Tokyo, in 1877) and many technical schools and colleges. By the early 20th century it was
producing a few world-class scientists and had achieved sufficient economic and military power to
defeat the Russian Empire in war. An even greater power crystallizing around 1870 was the then
truly, if forcibly, re-United States, whose disastrous Civil War, concluded in 1865, gave way to large-
scale modernization. It may be symbolized by the 40-foot, 700-ton Corliss steam engine that powered
acres of machinery at the exposition with which the country celebrated, in 1876, the centennial of its
founding. The conversion of the United States into an industrial nation between 1870 and 1900
created private wealth that paid for universities emphasizing modern subjects, observatories, libraries,
and, just after 1900, the best endowed foundation for physical research in the world, the Carnegie
Institution of Washington.

In Europe, 1870/1 saw the unification of Germany, which integrated economies, removed trade
barriers, enhanced competition among the state-based universities, and promoted centralized
encouragement of science-based technologies. By 1900 Germany led the world in the manufacture of
scientific instruments, optical glass and other high-tech industries, scientific publishing, and the
application of science to the uses of government and the military. The appearance of the Kaiser,
dressed in the uniform of an officer of engineers at the Technische Hochschule Berlin in 1899 to grant
German higher technical schools the right of awarding doctoral degrees, may be taken as a symbol of
the German integration of science, technology, and the state. It may also be taken as a measure of the
relative appeal of the Kaiser and the academy; before his speech, the ratio of enrollments at
universities and polytechnics stood at 3 to 1; within a year or two it had risen to 4 to 1.

Germany's formal unification followed immediately on its defeat of France. In its humiliation, France
strove to strengthen itself where it felt inferior to its adversary -- science and technology. During the
1870s, France sent missions to study German methods and institutions; in consequence, the French
overhauled their system of higher education, gave greater autonomy to provincial universities, and
rebuilt Parisian institutions. As a result, by 1900 the French were on a par with the Germans in per
capita investment in several sciences and technologies. Although it did not bear much fruit in physics
or chemistry during the period 1870-1920, the unification of Italy and the reduction of the Papal
States to the Vatican City, which also took place in 1870/1, prepared the way for the rise of Italy as an
effective power in physical science between the world wars.

The period 1870-1913 saw the rapid industrialization not only of the United States but also of the
major European powers. An indication of the rapidity of the so-called second industrial revolution of
the later 19th century in Europe is the pace at which the dominant countries (Britain, France,
Germany) approached their outputs on the eve of World War I. Table I rates the rush.

Table I
Index of industrial production
achieved at the indicated dates (a)

            1860  1870  1880  1890  1900  1910  1913
Britain       34    44    54    65    76    85   100
France        26    31    38    49    64    89   100
Germany       14    16    21    37    60    86   100
U.S.           8    12    21    36    55    84   100

(a) Taylor, Struggle (1971), xxxi.

Roughly speaking, as late as the 1870s the industrial development of the European countries over all
previous time amounted to only one-fifth to one-half of what they would attain by 1913. It appears
from the table that Britain's growth had settled to a steady pace by 1890, while the rates of growth of
France, Germany, and the United States rose sharply.

Another figure of merit in the industrial competition of the years around 1900 is production of steel,
first mass-produced in the 1860s.

Table II
Steel production, 1870-1914
(in millions of tons) (a)

            1870  1880  1890  1900  1910  1914
Britain      0.7   1.3   3.6   5.0   5.9   6.5
France       0.3   0.4   0.7   1.6   3.4   3.5
Germany      0.3   0.7   2.3   6.7  13.8  14
U.S.          --   1.3   4.3  10    26    32

(a) Taylor, Struggle (1971), xxx.

As appears from Table II, Britain's total in 1870, 700 kilotons, exceeded the sum of the production of
its competitors. In 1880 Britain and the United States came equal and the total of the big four was
almost three times what it had been in 1870. In 1890 the United States alone produced more than the
big four's production of 1880 and in 1910 its output exceeded that of the three European powers
combined.
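The comparisons in the last paragraph can be checked arithmetically against Table II. In the sketch below the U.S. series is assumed to begin in 1880 (the row as printed lists only five figures), with the missing 1870 entry treated as negligible:

```python
# Steel production in millions of tons, after Table II (Taylor, Struggle, 1971).
# None marks the missing U.S. figure for 1870.
years = [1870, 1880, 1890, 1900, 1910, 1914]
steel = {
    "Britain": [0.7, 1.3, 3.6, 5.0, 5.9, 6.5],
    "France":  [0.3, 0.4, 0.7, 1.6, 3.4, 3.5],
    "Germany": [0.3, 0.7, 2.3, 6.7, 13.8, 14],
    "U.S.":    [None, 1.3, 4.3, 10, 26, 32],
}

def total(year):
    """Combined output of the big four in the given year."""
    i = years.index(year)
    return sum(row[i] or 0 for row in steel.values())

# 1870: Britain alone out-produced its three competitors combined.
assert steel["Britain"][0] > total(1870) - steel["Britain"][0]
# 1880: Britain and the United States came equal.
assert steel["Britain"][1] == steel["U.S."][1]
# 1890: the U.S. alone exceeded the big four's total of 1880.
assert steel["U.S."][2] > total(1880)
# 1910: U.S. output exceeded the three European powers combined.
assert steel["U.S."][4] > total(1910) - steel["U.S."][4]
```

The text's further claim that the 1880 total was almost three times that of 1870 also follows: roughly 3.7 against 1.3 million tons.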

Among the largest consumers of steel were steamships and railways. Lloyd's Register included no
steel vessels in 1870 and under 400 kt (kt = 1000 short tons) of iron vessels, and the world's
commercial fleet had seven times as much tonnage under sail as under steam. In 1906 the Register
listed no iron ships and 1492 kt of steel vessels, and the Mauretania and the ill-fated Lusitania, each
with a 32,000-ton displacement (over twice that of the market leader of 1900, the Deutschland) and a
maximum speed of 24 knots, were coming off the ways. The ratio of steam to sail had switched to
2:1, a fourteen-fold swing. On land, the world's railroad track almost doubled during the 1860s, from 108
kkm to 210 kkm; around 1885 it equaled the distance from the earth to the moon (384 kkm); and it
had doubled again, to 790 kkm, by 1900. By 1910 the track had reached over a million kilometers, a
five-fold enlargement since 1870. About three-quarters of the U.S. output of steel in the 1870s and
1880s went for railroad tracks.

The trains and steamships enabled scientists to visit one another's laboratories and attend international
meetings in numbers, and with a frequency, inconceivable in 1870. In 1904 a delegation of leading
European savants traveled to the United States to give lectures at the International Exposition held in
Saint Louis, Missouri; and in 1914 the British Association for the Advancement of Science held its
annual meeting in Australia, which drew several German scientists as well as many participants from
the United Kingdom. These academic nomads perfectly modeled the lucky citizens of H.G. Wells'
Modern Utopia (1905), in which fast and efficient transportation provided the exchange of
information, personal freedom, and ready access to employment that secured the prosperity of its
world state.

A less benign consumer of steel was the military. Spending on steel ships and armaments, electrical
communications, and other high-tech measures of "defense" by the great powers before the Great War
helped to drive a furious arms race (Table III).

Table III
Defense Estimates of the Great Powers
(millions of pounds sterling) (a)

            1870  1880  1890  1900  1910  1914
Britain     23.4  25.2  31.4  116   68    76.8
France      22.0  31.4  37.4  42.4  52.4  57.4
Germany     10.8  20.4  28.8  41    64   110.8

(a) Taylor, Struggle (1971), xxviii.

The decline of France in comparison to Germany after 1900 leaps to the eye. The spike in British
expenditures in 1900 reflects the Boer War. By 1914 Britain's per capita spending on the military
came exactly even with Germany's, £ 1.7; France was not far behind, at £ 1.5. These figures were far
in excess of expenditures on academic science. They represented between 3% and 4% of GNP in the
years around 1900, whereas academic physics (about 70% of the physics profession) then consumed
about 0.005% of GNP in Britain, France, Germany, and the United States. Competition among the
nations in military preparedness, as well as in trade and commerce, apparently had produced similar
demands for physics in the major powers at the turn of the century, although on a scale 1000 times
smaller.

The first manufacturers of high-tech products to perceive that academically trained scientists brought
profits grew rich. Thus Andrew Carnegie (1835-1919), writing in the late 1870s: "We found...a
learned German, Dr Fricke, and great secrets did he open up to us....Nine-tenths of all the
uncertainties of pig-iron making were dispelled under the burning sun of chemical knowledge....Years
after we had taken chemistry to guide us, [our competitors] said they could not afford to employ a
chemist. Had they known the truth then, they would have known they could not have afforded to do
without one." Other calls for men of science went out from industries that did not exist before 1870:
organic dyes and pharmaceuticals, rubber and petroleum, electric light and power.

The newly awakened source of power, electricity, fused a multidisciplinary workforce. The total
available electrical supply increased between 1900 and the war by 1500% in Germany, by 1000% in
Britain, and by 400% in the United States; the Americans, led and fed by Niagara Falls, generated
the largest quantity, though not at the greatest rate of change:

Table IV
Use of Electricity (10^9 kWh) (a)

             1895    1900    1905    1910    1913
Britain      0.38   0.180   0.645   1.27    1.97
Germany     0.036   0.252   0.655   2.18    3.73
U.S.        4.77 (1902)             17.6 (1912)

(a) Hannah, Electricity before nationalisation (1979), 427-8; Zängl, Deutschlands Strom (1989), 47;
Bauer et al., Electric power (1939), 4, 7.

Much of the new power, especially in the United States, went to electrochemical industries: cryogenic
separation of gases (notably oxygen and hydrogen), electrolytic separation of metals (notably copper
and aluminum), heat reduction of oxides of inaccessible metals (chromium, manganese, tungsten,
molybdenum), processes for making fertilizers (nitrates, phosphates), alkalis, and so on. The design
and operation of most of these processes required the cooperation of chemists, physical chemists, and
industrial engineers. An indication of the size and keenness of this interdisciplinary body was the
formation of the American Electrochemical Society in 1902, with over 550 members drawn from
higher education and industry. Contemporaries remarked the "amazing rapidity" of this
"crystallization." The wise governors of Wells' Utopia placed their largest and best-equipped
laboratories not in universities but adjacent to power stations and major industries.

To meet the demand for men trained in the physical sciences to the bachelor's level, universities
expanded their curricula and a host of technical schools sprang up, the Technische Hochschulen of
German-speaking countries and their counterparts in Britain (the university colleges and universities
in manufacturing towns) and France (the Ecole municipale de physique et chimie industrielles and
provincial faculties of science and technology). Stronger demand for university training in physical
science also came from the medical profession, which expected doctors to know something about
electricity and chemistry, and from the education ministries, which saw a need for secondary school
teachers able to prepare students for further training in practical arts and sciences.

The number of university students in the four major powers increased rapidly during the first decade
of the 20th century. The total British student-body went up by 20%, to 25,000; the German by 60%, to
60,000; the American by over 80%, to 187,000. Since the total enrollment of the Technische
Hochschulen remained constant at one-fourth that of the universities, it too increased by 60%, to
around 16,000, in 1910. Enrollments in science and medicine grew faster than total enrollments.
Over the whole period, 1870-1910, the student-bodies in German universities increased by a factor of
4.6 (to 60,000); in medicine, by a factor of 6 (to 18,000); and in the natural sciences faster still, by an
order of magnitude, to 7500. Medical students usually made up over half the elementary courses in
physics and chemistry in Germany around 1900. A similar situation held in France, at least after the
introduction of the P.C.N. course (Sciences physiques, chimiques et naturelles) in 1893, and probably
also in Britain, as at the Cavendish Laboratory of Cambridge University. Few of these many
students became professional physicists or chemists, and very few of those (in physics, under one
percent) went into research. The pressure of the many, however, contributed importantly to the output
of the sciences since the growing academic teaching staff produced most of the published research in
physics and much of that in chemistry around 1910.

2. The Profession

Demography

Reliable, systematic, and comparative figures for the size of the profession of physical scientist exist
for physics in the main European countries and the United States for 1900, and for chemistry for the
United States only. Since before World War I physicists worked on many subjects that now fall to
other disciplines, conclusions drawn from their demography apply more widely. One out of every six
invited papers delivered at the International Congress of Physics held in Paris in 1900 concerned
biophysics or geophysics. German bibliographies of research in physics during the early years of the
20th century routinely covered physical chemistry, geophysics, meteorology, and cosmic physics,
which taken together made up half of the world's literature on "physics" published in 1910. An
Adressbuch der lebenden Physiker published in 1909 lists 3170 "physicists," 592 of whom gave
astronomy as their subject, 284 geophysics or meteorology, 256 technology, and 214 physical
chemistry. Most of the rest described themselves simply as physicists. Chemistry diverged from the
pattern for physics in proportion to its longer and closer ties to industry.

Seven hundred physicists – about one in every four of the 2800 persons who taught or used physics in
Europe in 1900 -- registered at the Paris Congress. Three of every five of these were teachers, one an
engineer, and one a government or a military man. The teachers, particularly those employed at a
higher school or university, dominated the meeting out of all proportion to their numbers. Although
only forty percent of the assembly, they did 90 percent of the talking. They spoke more because they
had more to say, being the authors of over 70 percent of the research reports then published in the
leading scientific journals, and because academics outranked practical men.

The population of higher academic physicists, some 800 in Europe and 150 more in the United States,
was growing at about 3 percent a year at the time of the Paris Congress. This figure, pieced together
by counting institution by institution, agrees well with the number of physicists (971) teaching in
universities and higher schools as given in the Adressbuch of 1909. A corresponding estimate can be
made for chemists from the datum that in the U.S. in 1900 there were perhaps 500 academic chemists,
about 2.5 times the number of academic physicists. If the ratio held for other major chemistry-
producing countries, the world had almost 2500 chemists teaching in its higher schools and colleges
around 1900. Also in 1900, academics made up a little over a fifth of the American Chemical
Society. If the ratio 1:4 of academics to all others held worldwide, there would have been 12,500
chemists in Europe and in the United States in 1900, a little under five times the number of physicists.
This figure cannot be far off, since the combined membership of the chemical societies of Britain,
France, Germany, and the United States in 1910/11 was 12,500. Allowing for the facts that the body
of chemists grew between 1900 and 1910, that the chemical societies did not enroll all chemists, and
that the big four had over 9000 chemists in 1900, we arrive again at a number around 12,000 for the
world's supply of chemists at the fin-de-siècle.
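The chain of estimates in the last paragraph can be reproduced as a back-of-the-envelope calculation (a sketch using only the ratios stated in the text; the figures are of course rough):

```python
# Reconstruction of the population estimates in the text, ca. 1900.
academic_physicists = 800 + 150        # higher academic physicists, Europe plus the U.S.
chemists_per_physicist = 2.5           # U.S. datum: ~500 academic chemists, 2.5x the physicists
academic_chemists = chemists_per_physicist * academic_physicists
print(round(academic_chemists))        # 2375 -- "almost 2500" chemists teaching worldwide

academic_fraction = 1 / 5              # academics made up ~1/5 of the American Chemical Society
all_chemists = academic_chemists / academic_fraction
print(round(all_chemists))             # 11875 -- around 12,000, near the societies' 12,500
```

The result sits comfortably beside the independent check in the text: the combined membership of the four national chemical societies in 1910/11 was 12,500.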

Although Germany had over twice as many chemists as Britain in 1900 (over 4000 against under
2000), the British chemical industry was then probably still the world's largest. The solution to this
apparent paradox is that Germany dominated in fine chemicals, in dyes, drugs, and perfumes,
industries that required a well educated, plentiful, and innovative work force. Spokesmen for British
chemistry constantly complained about the relatively poor quality of the "trained" British chemist and
the considerably higher ratio of teachers to students of chemistry in Germany. Efforts to correct the
imbalance did not succeed and by World War I Britain was largely dependent on Germany for its
supply of chemical products. So was France, despite its lead in electrochemistry around 1900, owing
largely to the work of Henri Moissan (1852-1907); but as the Jury of the International Exposition held
in Paris that year remarked, the French lacked the manpower and infrastructure to maintain their
position.

The figures obscure an important novelty in the population of physical scientists around 1900. This
was the slowly growing but increasingly important subset of academics specializing in theory. They
occupied chairs primarily in Germany and countries that adopted the organization of German
universities, and also in Britain and the British Empire, where graduates of the Cambridge regime in
mathematics held about half the permanent positions in physics in 1900. The incumbents of the few
chairs in France for physique mathématique tended to identify with mathematics, although, as in the
case of Henri Poincaré (1854-1912), they might pay attention to the epistemological as well as to the
mathematical basis of physical science. In the United States, pragmatic as usual, the theorist scarcely
existed.

The thoughts of the theorists did not turn entirely to physics even when they held a post in the subject.
For example, Hermann von Helmholtz (1821-94, Berlin), Max Planck (1858-1947, Berlin), and J.J.
Thomson (1856-1940, Cambridge), to name three of the most prominent "theoretical physicists" of
the decades on either side of 1900, gave sustained attention to chemical thermodynamics, chemical
combination, and the problematic ionic theory of solutions; often their subject matter overlapped with
that of Wilhelm Ostwald (1853-1922, Leipzig), Svante Arrhenius (1859-1927, Stockholm), and J.H.
van't Hoff (1852-1911, Berlin), the protagonists and theorists of physical chemistry. Their protégé
Walter Nernst (1864-1941) began as a junior professor of physics at Göttingen, upgraded in 1894 to
professor of physical chemistry, and left in 1904 for a similar position and a new institute at Berlin,
where he contributed importantly to quantum theory; the new doctor Albert Einstein (1879-1955),
trying desperately to put a foot on the academic ladder, sought a place, vainly as it turned out, as an
assistant to the physical chemist Ostwald.

Counts of theorists active in 1900 are not very reliable: not only did theorists occupy posts for
generalists but experimentalists for whom no other place could be found sometimes sat in chairs for
theorists. Perhaps the most useful indicator is that 15% of all academic positions in German physics
around 1900 were intended for, if not occupied by, theorists. The modest total, 16 in all, agrees
perfectly with the number of German physicists who declared theory as their specialty in the Adressbuch of 1909. British
professors of physics trained in mathematics at Cambridge were not theorists in the German sense;
few of them tried to make grand, quantified world systems in which to fit experimental data.
Nonetheless, their ability, honed by their course of study, to devise and calculate the behavior of
mechanical models gave English physics a distinctive orientation that proved astonishingly useful in
the first explorations of the microworld.

French physical science also had a distinctive character, imbued with positivism and instilled at the
Ecole polytechnique and Ecole normale in Paris, through which over half the academic physicists and
chemists active in France in 1900 passed. No such center existed in Germany, where students
customarily attended more than one university or polytechnic, or in the United States, where many
professors, particularly in chemistry, had done advanced research in various parts of decentralized
Germany. American domestic Ph.D.'s came from a variety of universities, mainly Johns Hopkins,
Cornell, Harvard, Columbia, and Chicago; if they had a particular orientation, it derived from the
enjoyment of greater space and freedom, and more and larger equipment, than most of their European
counterparts possessed. The quality of these home-grown doctors and their facilities did not escape
the attention of the Prussian Ministry of Education, which in 1910/11 placed a dozen American
universities on a par with German ones.

Resources

Most informed observers ranked Germany first among the science-producing nations at the turn of the
century. Next came Britain and France, in that order, and, last among the big four, the United States.
The impression of German predominance came not only, or perhaps (except for chemistry) primarily,
from the quality of the product. The world's most accomplished measurer, Albert A. Michelson
(1853-1931), was an American; no one controlled the mathematical methods of physics better than
Cambridge graduates; for elegance of expression, the French had no equal. Germany appeared to
dominate physics and chemistry because of the quality and solidity of German-language publications,
the preponderance of German firms in the manufacture of scientific instruments, the relatively
generous support of science institutes by several states of the Reich, and the prestige enjoyed there by
professors.

The most conspicuous apparent advantage of German science was the acceptance by the several state
governments of the need to support it. Prussia (whose leading universities were Berlin and Göttingen),
Saxony (Leipzig), and Bavaria (Munich) were relatively generous in making capital grants for expensive
apparatus and new plant. That peculiar German institution, the call and its appurtenant negotiations, helped to
inspire generosity. A professor in one state called to serve another would dicker with both over salary,
assistance, budget, furnishings, and buildings; a rising man of distinction, like a Helmholtz, Ostwald, or
Nernst might leave behind a string of renovated laboratories and even institutes. Decentralization was the
driving principle, and the guarantee that in Germany the best scientists would not, as in France, be crushed
into the apex of a single educational system.

In France, all significant universities owed their financing for letters and science almost entirely to the central
government. One did not dicker over calls: everyone's ambition was an appointment in Paris and, once there,
the practice of cumul, or the accumulation of posts. Henri Becquerel (1852-1908), by no means an
overachiever, held three professorships in Paris, one won on his own and two passed down as if private
property from his father, who had had one of them from his father. Dynastic intermarriage among professorial
families further restricted access to the top; and the generation of the Curies, who had few blood ties with that
of Becquerel, found it convenient to marry among themselves. The centralization of support in a government
increasingly pressed for money, the attendant cumul, and intermarriage help account for the relatively poor
showing of French physics and chemistry during the first decade of the 20th century.

In the British Empire little, and in the United States no support for academic physical science came from the
central government. Municipalities contributed in the first case and states in the second; but a large source for
funds for expansion in both came from individuals or corporations. Spokesmen in each country, particularly
in Britain, compared this uncertain funding unfavorably with the generosity of the German states. This
perception probably did not correspond to reality in chemistry around 1900 and it certainly erred in physics,
whose material base then was expanding more rapidly in the Anglo-Saxon countries than in their principal
rival. The leading institutes of the British Empire and the United States had all been built and furnished by
private philanthropy: Cambridge, London, Manchester, McGill, Johns Hopkins, Harvard, Columbia, Cornell,
Chicago. German statesmen of science pointed to this generosity as something to fear and emulate.

Americans were already the biggest spenders on salaries, equipment, and new plant for academic science. The
total for physics was 1.5 times that of the British Empire, twice that of Germany, thrice that of France. And
the disparity was widening, American expenditures growing at 10 percent a year, British and German at
around 5 percent, French at perhaps 2 percent. (The 12 American universities ranked as equal to German ones
by Prussia in 1910/11 had operating budgets over three times as great on average as their Prussian
counterparts.) As observed earlier, however, expenditures for academic physics in the four nations were
about equal around 1900 when reckoned as a fraction of national income; and since the fraction of the
population of each country engaged in academic physics (about three per million) was also about the same,
the investment per academic physicist in each of the four major powers in science had equalized at the turn of
the century despite the diversity in funding and recruitment. But the equality was fleeting. The United States
pulled ahead, Britain and Germany maintained their relative positions, and France slipped behind. In 1903
the operating expenses of its leading producer of physicists, the Ecole Normale Supérieure, began to fall.

These investments did not bring proportionate returns. Germany contributed almost one-third of the papers
published in the core physics journals in 1900, and its professors were three times as productive (judging by
number of publications) as the worst performing group, the Americans. In chemistry German dominance was
even more evident owing to the number and competence of its chemistry Ph.D.s and the quality of its journals
and reference works. Between 1900 and the war 50% of citations to papers in American chemical journals
were to German publications. The Société chimique of Paris devoted over 70% of its pages to reporting work
done across the Rhine.

The newly perceived usefulness of academic science constituted the main force driving per capita investment
toward equality. Spokesmen frequently resorted to the old strategy of exaggerating advances in other
countries to improve the situation of science at home. During the decade before World War I the effects of
commercial, military, and rhetorical competition became ever more conspicuous in manpower and materiel.
Britain and the United States, embarrassed to have to send their products to Germany's Physikalisch-
Technische Reichsanstalt (PTR, founded in 1887) for certification, established standards laboratories of their
own. The National Physical Laboratory (1900), directed by a Cavendish man, R.T. Glazebrook (1854-1935),
had a physics department of 24 in 1905, of whom about a dozen had degrees; the number reached 63 in 1914.
The National Bureau of Standards in Washington (1901), directed by S.W. Stratton (1861-1931), formerly a
professor of engineering and physics, started with three in physics, two with Ph.D.s, and, by 1910, had 31
physicists, a chemical department, and a budget twice that of the PTR. Government grants, testing fees, and,
what was especially important for the National Physical Laboratory, gifts from individuals and firms, brought
the three testing bureaus an operating income and fund for instruments greater than amounts for comparable
purposes available in universities.

The industries producing the products for testing had set up research laboratories of their own by 1913. The
General Electric Research Laboratory, established in 1900 with a staff of two or three, numbered perhaps 200
in 1913 including the former academic physicists Irving Langmuir (1881-1957), who had obtained his Ph.D. in
1906 under Nernst, and W.D. Coolidge (1873-1975), who also had a German Ph.D. and a period at MIT under
Arthur A. Noyes (1866-1936), an ionist trained by Ostwald. A little behind General Electric in date of
foundation and liberality of policy came laboratories at American Telephone and Telegraph (1907), Corning
Glass (1908), Eastman Kodak (1912), Philips Eindhoven (1914), and Siemens & Halske (begun before 1900,
greatly enlarged beginning 1913). In 1909 some 200 people worldwide described themselves as industrial
physicists or electro-technologists, and another 50 said they were otherwise engaged in engineering physics.
And the physicists trailed the chemists, who began to set up industrial research laboratories from around 1870
to improve products for the very competitive market in organic dyes. By 1900 the industry leaders, AGFA,
Bayer, BASF, Hoechst, and Oehler, all German firms, each maintained a laboratory with more than a hundred
chemists. About 3000 chemists worked in German chemical industries in 1913, and the number of new hires
reached 400 a year.

Three other institutional forms created with government and/or industrial money in the early 20th century
affected the tone and pace of physical science. The Nobel prizes, endowed with the proceeds of dynamite and
smokeless powder, were intended by their founder to reward those "who, during the preceding year, shall
have conferred the greatest benefit on mankind." As prizewinning work in physics and chemistry Nobel had
in mind discoveries and inventions of the sort he had made. Although the first prize in physics, to William
Conrad Röntgen (1845-1923) for x rays in 1901, and several of the early prizes in chemistry, met this test, the
professors soon conquered the system and rewards went more and more to academic work of no immediate
practical value. Thus the fortune of the industrialist unintentionally came to support the most esteemed and
valuable prize for achievement in basic science.

Proceeds from Andrew Carnegie's endowment of $10 million in 1901 for a research institute "to conduct,
endow, and assist investigation in any department of science, literature, or art, and to this end to cooperate
with governments, universities, colleges, and technical schools, learned societies, and individuals," went
mostly to terrestrial magnetism, geophysics, and solar physics. Physicists occupied the highest positions in
the Carnegie Institution: the president from 1904, R.S. Woodward (1849-1924), came from a professorship in
mechanics and mathematical physics at Columbia; the head of terrestrial magnetism had a degree in physics
from the University of Berlin; the head of geophysics, a Yale Ph.D., trained at the PTR; the head of the solar
observatory at Mount Wilson, George Ellery Hale (1868-1938), an entrepreneurial astronomer frequently
nominated for the Nobel prize in physics, became the major force in mobilizing American science during
World War I.

The size of the Carnegie gift staggered contemporaries. James Dewar (1842-1923), professor of natural
philosophy at Cambridge and of chemistry at the Royal Institution of London, calculated that the interest on
the gift for one year exceeded the total expenditure of the Institution's laboratory, in which Michael Faraday
(1791-1867) had made his physical-chemical discoveries, for a century. Concerned German planners, desiring
to inspire similar generosity among the Reich's industrialists, proposed the creation of the Kaiser-Wilhelm-
Gesellschaft (Kaiser-Wilhelm Society) to assist in financing "pure" research institutes. The Society would
support gifted investigators to pursue truth armed with the most powerful weapons of the research front and
far from the battle of the classroom. The Society's projectors, starting from a premise that reads oddly
today—"Science has reached a point in its scope and thrust that the state alone can no longer care for its
needs"—pointed to the Carnegie Institution, to the laboratory of the chemist T.W. Richards (1868-1928) at
Harvard (supported by the Carnegie Institution and devoted to the determination of atomic weights), and to
Arrhenius' Nobel institute for physical chemistry, as evidence that Germany was falling behind in the
mobilization of resources for science and in setting the "pure scientific basis" of technological progress. The
announcement in 1909 of an endowment of over four million dollars to Columbia University, and of a gift of
a million dollars to the University of Chicago for a physics institute, helped the German plan along. The
Kaiser-Wilhelm-Gesellschaft incorporated in January 1911, with a pledged capital of around 10 million
marks, a fourth of the endowment of the Carnegie Institution.

The first plans for what became the Kaiser-Wilhelm institutes (KWIs), drawn up in 1908/9, gave as the most
urgent need of German science a center for research on radioactivity and electron physics. This choice, made
with the help of Nernst and the Berlin Academy of Sciences, corresponded perfectly with the most pressing
problems in physics then susceptible to direct experimental investigation. However, the first three institutes
built for physical sciences all went to chemistry. They were a collective substitute for a large-scale chemical
analogue to the PTR. The chemists could not acquire the funding they required for their analogue; the Kaiser-
Wilhelm-Gesellschaft moderated their project and made its realization its first priority. The KWI für Chemie
preserved a relic of the original plan in a division for radioactivity under the direction of Otto Hahn (1879-
1968); the KWI für Chemie und Elektrochemie devoted its resources to the mixture of basic and applied work
in physical chemistry cultivated by its director Fritz Haber (1868-1934). Both were chartered in 1911 and
opened in 1912. The capital cost of the buildings, around a million marks each, fell a little under that of an
up-to-date physics institute such as Leipzig's, which cost 1.3 million in 1904. A third institute, the KWI für
Kohlenforschung, which investigated hydrocarbons as fuels and raw materials (chartered 1912, opened 1914),
was the most applied of all.

Neither France nor Britain achieved the integration of basic and applied chemistry represented by the first
KWIs and the Chemische Gesellschaft of Berlin, whose 3100 members in 1900 came indifferently from
academic and industrial, basic and applied, backgrounds. In contrast, the Société chimique of Paris, which had
around 1100 members in 1900, the great majority of them teachers, had little to do with practical chemistry.
Perhaps in consequence, its conference on chemistry at the Universal Exposition of 1900 drew few
participants, whereas the same year the Association des chimistes de la sucrerie et de la distillerie attracted
more than 1800 participants to its International Congress of Applied Chemistry. In Britain, although many
students (some 2400 of them in 1895) took some sort of examination in chemistry, very few submitted to
advanced training and few of these had much to do with industry. Everyone who earned a bachelor's degree
and doctorate in chemistry at University College London (a major center) between 1888 and 1901 went into
teaching; the corresponding figure for 1902-13 is 53 percent. For those who stopped with a bachelor's degree,
the figures are 33 percent and 20 percent, the latter agreeing with the careers of graduates of the relevant
Cambridge course (Part II of the Natural Sciences Tripos); but very few who had industrial jobs did much
more than routine analytic work.

Organization

A reason for lumping physicists and chemists into the same profession is that, by 1900, they faced similar
career patterns and choices. They held the same sorts of jobs in teaching, research, industry, and government
service, although with a different distribution owing to the greater opportunities for chemists in applied work.
External recognition came from publication in core journals and in the reports and transactions of the main
professional societies and scientific academies. Among the societies the Deutsche Physikalische Gesellschaft,
the Deutsche Chemische Gesellschaft, the Physical Society of London, the Chemical Society of London, and
the Institution of Electrical Engineers were perhaps the most prestigious publishers. Among the national
academies, the French (Académie des sciences, Paris), the British (Royal Society, London), and, far behind in
status, the American (National Academy of Sciences, Washington) perforce outdid the Germans, since
Germany had (and has) no national academy; among regional academies, those of Berlin (Prussia), Göttingen
(Hanover), Leipzig (Saxony), Munich (Bavaria), and also of Cambridge (England) were the most important
outlets. In addition, patent disclosure could promote the careers of industrial scientists and general or
disciplinary news journals like Nature (founded 1871), La nature (1873), Physikalische Zeitschrift (1899),
and Chemical News (1859) disseminated both research news and good tidings.

The academies promoted careers by awarding prizes and conferring memberships. The Paris Academy was
foremost in the award of prizes (several hundred a year) and the restriction of membership (under 100
exclusive of correspondents). In keeping with the centralization and intermarriage of the French
professoriate, and the overwhelming dominance of the academy by professors, many members of the select
Académie were related to one another by blood or marriage. The Royal Society was larger, more democratic,
and more open to people from outside academic life; it had a tradition of admitting engineers and other men
of action. The U.S. National Academy of Sciences, wishing to make clear to the public the difference
between science and technology, refused to admit the greatest inventor of the age, Thomas Edison (1847-
1931). The regional academies of Germany offered the prestige of membership to representatives of most
respectable scholarly disciplines; they did not confine themselves to the natural sciences as did the national
institutions in Paris, London, and Washington. But they kept clear of engineers or technologists who did not
also have strong scholarly credentials; as late as the 1920s Planck, by then a long-standing secretary of the
Berlin Academy, successfully opposed the establishment of a section for applied science. In his scheme of
values, history was closer to natural science than engineering.

Above the local level (universities, polytechnics, government and industrial laboratories) and the
regional/national level (professional societies, academies), stood, in the scale of inclusiveness, the national
associations for the advancement of science and international organizations of various degrees of
specialization and permanence. The national associations collected scientists, supporters of science, and the
interested public in large annual meetings that changed venue from year to year. Physicists and chemists
considered it a great honor to be invited to address plenary sessions at these meetings and to report on the
latest work in their sub-disciplines. Especially around 1900, when the landscape of physical science changed
quickly, the meetings of the national associations of Britain (British Association for the Advancement of
Science) and Germany (Gesellschaft deutscher Naturforscher und Ärzte) were well attended and influential.
Both often drew participants from abroad.

Internationalism was as distinctive a feature of the period 1870-1914 as were the nationalisms it tried to
ameliorate. The most instructive index may be the frequency of international congresses and conferences,
encouraged not only by cosmopolitan or humanitarian promptings, but also by the ever enlarging and
increasingly efficient communication and transportation system of Europe. The need to coordinate and even
regulate this system inspired many international meetings. The raw statistics appear in Table V.

Table V
International congresses and conferences held per decade (a)

                      1870-79   1880-89   1890-99   1900-09
Earth sciences (b)       12        16        18        29
Physics (c)               5         6         4        26
Chemistry (d)             –         5        20        38
Transp/Comm (e)           4        15        10        27
Sum                      21        42        52       120
All meetings            168       311       510      1062
Sum/All               0.125     0.135     0.102     0.113

a. Annuaire de la vie internationale, vols. 1-2 (1908-09, 1910-11).
b. Geodesy, geography, meteorology, climatology, oceanography.
c. Includes weights and measures, applied electricity.
d. Includes mining, metallurgy, photography.
e. Telegraphy, railroads, aeronautics.

These roughly sorted data indicate that international meetings pertaining to physical science and its
applications increased at the same pace as international meetings as a whole, holding constant at something
over 10%. They held their importance relative to the diplomatic, commercial, and legal concerns that
prompted the bulk of international meetings, but their frequency does not suggest that scientists were more
internationally minded than businessmen. Indeed, the rate of foundation of permanent international
organizations points to a shift in time towards more general social sciences. During the period 1870-89, the
categories surveyed in Table V accounted for almost 40% of new permanent organizations; during the next 20
years, for 20%.
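
The totals and Sum/All ratios quoted from Table V can be verified with a few lines of arithmetic. The sketch below is a minimal check, not part of the original tabulation; it treats the blank 1870-79 chemistry entry as zero, which is what the printed Sum row implies.

```python
# Check the column sums and Sum/All ratios of Table V.
decades = ["1870-79", "1880-89", "1890-99", "1900-09"]
rows = {
    "Earth sciences": [12, 16, 18, 29],
    "Physics":        [5, 6, 4, 26],
    "Chemistry":      [0, 5, 20, 38],   # blank 1870-79 entry taken as 0
    "Transp/Comm":    [4, 15, 10, 27],
}
all_meetings = [168, 311, 510, 1062]

# Column-wise sums of the science-related categories.
sums = [sum(col) for col in zip(*rows.values())]
ratios = [s / a for s, a in zip(sums, all_meetings)]

assert sums == [21, 42, 52, 120]  # matches the printed Sum row
for d, s, a, r in zip(decades, sums, all_meetings, ratios):
    print(f"{d}: {s}/{a} = {r:.3f}")
```

The printed ratios (0.125, 0.135, 0.102, 0.113) emerge to three decimal places, confirming the internal consistency of the table.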

The largest number of international congresses and conferences in science dealt with subjects related to
standards (in procedures and units) or to phenomena best approached worldwide (the earth sciences). Such
subjects fell under the heading "unconscious nationalism" according to the distinction proposed by Alfred
Fried (1864-1921), head of the Office central des institutions internationales (Brussels). According to Fried,
rapid progress in transportation, communications, and technique inexorably shrinks the world, brings people
together, enforces cooperation; "all science unconsciously works in an international mode." Conscious
internationalism marked the organizations set up to promote one or another subject field by establishing a
permanent bureau and holding regular meetings.

Among the most consciously international of these organizations were associations of national bodies. The
International Association of Academies, founded in 1899 on the initiative of the Berlin Academy and the
Royal Society, was perhaps the first non-governmental umbrella organization. Its purposes, to promote
"science without frontiers" and to carve a space for academies in the crowded world of scientific
organizations, were realized largely in bibliographical projects, standardization of nomenclature, and large-
scale geographical surveys. An Association internationale des sociétés chimiques was founded in 1911 on a
proposal by Ostwald, who offered his house as its headquarters, his books as its library, and his Nobel prize as
its dowry. A similar association for physical societies was suggested by Charles Edouard Guillaume (1861-
1938) in 1914, just as internationalism, which Fried and his colleagues rated as "the most characteristic and
impressive phenomenon of the 20th century," was about to dissolve in war. Guillaume was director of the
Bureau international des poids et mesures, established in 1875. Supported at about the level of the physical
institute of the university of Berlin by a consortium of states, it maintained metrological standards, tested new
products, and did some research, notably precision measurements on the properties of nickel-steel alloys, for
which Guillaume received the Nobel prize in 1920.

The logical apex of the international movement in physics and chemistry would be an organization to
coordinate the world's work in physical science in accordance with a grand research plan. In 1912/13 this
logical space was filled by the generosity of Ernst Solvay (1838-1922), the wealthy Belgian baron of
chemical industry and would-be theoretical physicist. He had already endowed institutes for physiology and
sociology in Brussels when Nernst persuaded him to underwrite an international council of distinguished
physical scientists to discuss the burning questions of the day – the theory of radiation and quanta. Solvay
considered the meeting an opportunity to air his views before the greatest experts in the world; although they
ignored him, they made so great a success of their meeting that Solvay was inspired to found an Institut
international de physique to continue the discussion. It would spend his money on fellowships for young
Belgians, small research grants for Europeans, and more meetings directed at questions in physics and
chemistry related to realizing his grand scientific vision. He flirted briefly with supporting Ostwald's
association of chemical societies, but chose to create an institute for chemistry parallel to that for physics.
Each would have a life of thirty years, time enough, according to Solvay's calculations, to complete the
physical sciences.

Early in 1914, when the physics institute had begun its work, the French news magazine of science, La
nature, described it as part of the vast international movement for the founding of scientific organizations, and
"the most solid attempt to date to create an organization [for the management and systematic work] of a
science in its entirety." The institutes did manage to define, if not to coordinate, salients on the research front
for a quarter of a century.

3. The Discipline

Solvay's lecture in Brussels in 1911 exploited many generally accepted ideas. He outlined a chemistry and
physics united through electric and energetic principles and applied to physiology (and ultimately to
sociological questions) in a strictly materialist and positivistic way. True science had no room for mystery,
vital spirits, souls, or grand constructions apart from Solvay's, which rested on laws unambiguously drawn
from and confirmed by experience. "Each particular human group, and even the entire human race, should be
thought of as an organized chemical reaction." Chemistry and physics are the sciences of "positive and
negative ether, atomistically and invariably cubifiable." How so? "Spatition and superficialization are
energetically produced solely by molecular contacts." Beneath this rubbish are assertions of the scientism,
and echoes of the debates over ether, energy, and atoms, that resounded in meetings of physical scientists
around 1900.

Scientism and descriptionism

The twin pillars of late 19th-century scientism were materialism and positivism, the doctrines that the goal of
science is the establishment of laws of the behavior of matter and that science so conceived contains all
"positive," that is, true, knowledge. These pillars rested on bedrock composed of the teachings of Auguste
Comte (1798-1857), the laws of evolution and chemical combination, and the great syntheses of mid 19th-
century physics: the theory of light with those of electricity and magnetism, of heat with that of motion, and,
via the laws of energy, of Newtonian mechanics with all of them. The increasingly prominent application of
physics and chemistry to multiplying material "goods," and the consequent identification of science with the
crass and the commercial, confirmed the view that science had become a menace to traditional values. Here
Solvay, who planned to use business methods to establish his materialist program, made an egregious
example.

Although academic scientists provided some of the material for materialism, most of them distanced
themselves from scientism. Especially the assimilation of science with technology, whether owing to
reasoned epistemology or to simple confusion, aroused their opposition. The assimilation threatened status.
The German scientist who considered himself an incarnation and conveyor of the highest aspirations of the
German people had something to lose by identification with "Materialismus und Amerikanismus.'' Physicists
who emphasized the humanistic education that they shared with high government officials and that separated
them from most commercial and industrial men were among the bitterest opponents of efforts to upgrade
technical schools and to introduce technical facilities into the universities.

In the United States, prominent men of science tried to distance themselves from a technology that
compromised their struggle to obtain public recognition for the value of disinterested or pure inquiry. The
Popular Science Monthly complained of those who would degrade science to a "low, money-making level;''
the American Association for the Advancement of Science heard about the wastage of American intellect "in
the pursuit of so-called practical science;'' the physical chemist Noyes feared "scorn at my turning my
attention [from ions] for industrial work;'' memorialists praised defunct physicists who had resisted the
blandishments of industry. According to the distinguished physicist Henry Rowland (1848-1901), the electric
telegraph, apparently the greatest recommendation for physical science, was in fact its greatest danger. A
public that confused the telegraph and the electric light with science might not support a purer physics.

Electro-technics may have begun in discoveries made in research laboratories. But once discovered, the
relevant principles could be exploited by engineers; ministers of education might therefore reasonably decide
to direct public and industrial moneys away from academic research into higher technical training. In Britain,
the technical schools were growing and multiplying far faster than university facilities for science. In France,
the Sorbonne professoriate and the Paris Academy, recognizing a threat to their subventions as well as to their
culture, rejected proposals to add technical courses to the curriculum. The interests at stake appear clearly in
the campaign to obtain for the German Technische Hochschulen the right to grant the doctor's degree. The
universities, including many science professors, opposed the concession on the grounds that it would cheapen
the degree intellectually and socially, admit the upstart technical schools to equality with the ancient seats of
culture, and diminish teaching and research funds previously reserved to the universities for the monopolistic
production of Ph.D.s. As reported earlier, these arguments did not persuade the Kaiser. In announcing the
award of the coveted right in 1899, he praised technical education for its part in securing Germany's
prosperity and for its social role, now abetted by the doctor's prize, of bringing sons of good families to useful
ways of life. In response, the rector of the Berlin Technische Hochschule defined the social program of his
school as the stamping out of humanism from those "trade schools for teachers,'' the universities; scientists
serving industry needed physics and chemistry as tools to improve products, not as knowledge to ornament
the mind.

The uncomfortable adjustment to their new circumstances reached by university physicists and chemists
around 1900 continued in force until the war. Some alleviation occurred through increases in research
support, progress in physical theory, and redefinition of the roles of university, technical school, government
bureau, and industrial laboratory. The Nobel prize probably also helped to protect against the "drowning out
of the gentle music of natural laws by the trumpet blasts of technical success." That at least was the
assessment of van't Hoff, the first laureate in chemistry, who applauded the decision to extend rewards to
discoveries of use in science as well as in practice. "In our time of industrial competition among the various
nations the other side had already acquired a great predominance.''

There remained the problem of dealing with detractors who confused the mechanistic picture derived from the
principles of energy, evolution, chemical physiology, and psycho-physics with scientific materialism and
condemned both for destroying faith and poetry. To these detractors, the teaching of science undermined not
only the family, ethics, aesthetics, and religion, but also the values of allegiance and sacrifice on which the
modern state depended; socialism and even anarchy would soon follow. Avant-garde degenerates, eager to
free themselves from impending determinism, joined forces with their fierce opponents, the spokesmen for
established religion and education, the anti-republicans and ultramontanes, to combat the threat.

An incident of 1895 exemplifies the thrust and parry of science and scientism at the fin de siècle. A famous
French literary critic, Ferdinand Brunetière (1849-1906), inspired by a visit to the Vatican, charged science
with moral and philosophical bankruptcy: "we cannot draw from the laws of physics or the results of
physiology any way of knowing anything.'' The scientific establishment responded that science has no
dogmas. Eight hundred non-dogmatists, including forty senators and seventy deputies, ate a banquet in
answer to Brunetière and in honor of Marcellin Berthelot (1827-1907), the positivist chemist and materialist
minister of state. In his toast, the president of the chamber of deputies revealed Brunetière's inspiration: "The
formula ‘the bankruptcy of science’ [he said] is above all a phrase of the political order, a means of
reactivating clerical reaction.''

Sensitive members of the British Association for the Advancement of Science perceived a similar threat in a
speech that Lord Salisbury gave before it in 1894. Salisbury observed that honest physical scientists admitted
that they knew very little about the nature of things. He deftly moved from confession of present ignorance to
affirmation of continued ignorance, from "ignoramus'' to "ignorabimus.'' Listeners like the applied
mathematician Karl Pearson (1857-1936) perceived that Salisbury's insistence on future impotence could only
encourage the inference that, since science did not have the answers, religion must be allowed its say. If so,
the danger was real, for Salisbury, or, to give him his due, Robert Arthur Talbot Gascoigne Cecil (1830-
1903), was the Prime Minister of England.

Or, perhaps, the greatest danger threatened from the other side, not from the Salisburys but from the
socialists. That was the long view of the historian Charles Pearson (1830-94), who predicted in 1893 that the
Western nations, having reached the limits of colonial expansion, would stabilize internally through
socialistic regimes, and that socialism would subvert science along with everything else. The new states
would not end science but rather promote work of detail, diffusion, application, and measurement, the dry
labor of specialists. Adolf von Harnack (1851-1930), author of the official history of the Berlin Academy of
Sciences and, in a more optimistic mood, a contributor to the founding of the Kaiser-Wilhelm-Gesellschaft,
thought that already in 1900 physicists had withdrawn from the big questions and that their attention to
insipid detail had cost them their intellectual ascendency.

Physical scientists did not find it easy to meet the opposing charges—to which they frequently referred—that
their science was both enervated and subversive. On the one hand, wishing to affirm their achievements both
for personal satisfaction and public support, they could not maintain that they were at the beginnings of their
science. On the other hand, they could not allow that they knew enough for a profound, satisfying,
humanistic description of nature. The condition of science recommended humility. At the jubilee of his
professorship in 1895 Lord Kelvin (William Thomson, 1824-1907) had offered one word to characterize his
life's work: "failure." He knew no more, he said, about the relations between ether and matter, or about either
on its own, than he had at the outset of his career, and neither Kelvin nor anyone else knew more about the
nature of gravity than Newton did when he stopped feigning hypotheses about it. The strategy adopted by
spokesmen of science to elude the embrace of technology, repudiate the charge of scientific materialism, and
sound the right note of vigor was modesty. They declared that science could not arrive at the essence of
things. Although the discoveries of physics and chemistry might lead to inventions subversive of the old way
of life, nothing they taught could challenge humanistic values, religion, or ethics. The doctrine that science
aims not at discovering the essence of matter but only at describing its appearances may be called
"descriptionism'' after the Victorian word "descriptionist,'' "one who professes to give a description.''

The keynote address to the International Congress of Physics of 1900 was one long and witty endorsement of
descriptionism. The speaker, Henri Poincaré, alluded to the new social role of physics. The task of
physicists, he said, is to improve the "output of the scientific machine;'' they labor more like librarians than
like philosophers, assembling catalogues of known facts, identifying gaps in the collection, and buying new
facts from nature with the improving resources of their laboratories. The success of their undertaking should
be measured by the completeness, convenience, and elegance of the catalogue of the library of science. In
classification there is no truth, only utility; the items of permanent value in the library are facts and
phenomenological relations. The physicist can change his catalogue or description—or, to drop the metaphor,
his theories and models—to suit his convenience. He does not, can not, attain Truth.

Elements of descriptionism may be found in Kant and Comte, and in the instrumentalist physical science of
the late eighteenth century, to go back no further. Its chief proximate sources within physical science in the
late 1890s were the practice of Kelvin and James Clerk Maxwell (1831-1879), the scruples of Gustav
Kirchhoff (1824-87), the jeremiads of Ernst Mach (1838-1916), and the opposition to atomism of Berthelot,
Ostwald, and other skeptical chemists. Eschewing inquiry into the fundamental nature of force, Kirchhoff
had declared in 1875 that the principles of mechanics were not laws laid down by nature, but descriptions
invented by physicists. At the same time, Mach was warning his colleagues that by taking their theories too
literally they had missed the basic truth that their science has nothing to do with truth. Neither Mach's nor
Kirchhoff's views prospered among continental physicists before Heinrich Hertz' (1857-94) demonstration of
the electromagnetic radiation predicted by Maxwell directed their attention to the British method in physics.
Many professed to be shocked at what they found.

Maxwell appeared not to reverence rigor. In keeping with the approach of the mathematical physicists trained
at Cambridge, he simultaneously employed contradictory mechanical analogies, or jumped from one model to
another. Poincaré noticed the "feeling of discomfort, even of distress'' with which Maxwell's acrobatics
afflicted French readers. Hertz acknowledged that he had never managed to follow the derivation of the
equations from the model. But success brings its own recommendation. Continental physicists desired to
know how Maxwell could erect an unshakeable structure—the fundamental equations of electrodynamics—
from a rickety scaffolding of borrowings from the Victorian machine shop. Hertz and Poincaré answered
that, since the models were intended only as descriptions, they did not have to be exact or consistent, and
could be chosen and discarded at will, without menacing the advances achieved with their aid.

The wide descriptionist consensus of 1900 accordingly did not include agreement about the best mode of
description. The great majority held the reduction of all physical phenomena to the principles of mechanics
as the prime desideratum, and even as the definition, of their subject. The principle of energy conservation
seemed to guarantee success; the unification of optics and electrodynamics had reduced the reductions
required; the gas theory showed the way forward. Or, rather, one way forward. Perhaps it would be more
fruitful to avoid messy details and merely state the equations of electrodynamics and thermodynamics as
formal consequences of an abstract formulation of general mechanics, such as the principle of least action?
That was the recommendation of Poincaré and Helmholtz. They observed, moreover, that when least action
holds, mechanical descriptions can be multiplied at will. They consequently deemed it a waste of their time to
look for any. Those who found this method too abstract might prefer illustration via a general mechanical
system, say a swarm of atoms or whirlpools in an ether. That was the practice of Alfred Cornu (1841-1902)
and many others, and, in some of their moods, of Kelvin and J.J. Thomson.

The strongest opposition to mechanical reduction came from Mach and Ostwald. According to Mach, even
the concept of matter had outlived its usefulness: the goal of science should be an economical statement of
relations among sensations, not reification of wishful thinking. In a famous speech before the Deutsche
Naturforscherversammlung in 1895, Ostwald urged his colleagues to abandon the atomic theory and other
explicit pictures proved unhealthy by thermodynamics: "thou shalt not make unto thee any graven image or
any likeness of anything.'' He suffered for his advice. His chemist friends told him that they could not solve
their problems without the concept of atoms. German physicists had only contempt for his attempt to replace
matter with energy. The French referred him to Poincaré for lessons on method. George Francis Fitzgerald
(1851-1901) answered for the British: "That might be alright for a German, who plods by instinct, but a
Briton wants emotion in his science, something to raise enthusiasm, something with human interest,'' viz., a
mechanical model.

The only alternative to mechanical reduction that had an important following around 1900 was the democratic
proposal to allow mechanics what it could handle and deliver the rest to electrodynamics. Hendrik Antoon
Lorentz (1853-1928) provides a distinguished example. Unable or disinclined to find an adequate
mechanical model of the ether, he loaded the concept with whatever characteristics, mechanical or not, he
needed to explain the properties of electromagnetic forces and their interactions with matter. Even physicists
who retained the hope for a mechanical world picture followed Lorentz' lead in everyday problems, treating
electromagnetic entities as irreducible, perhaps after having tried their hand at designing a mechanical ether.
Before Einstein announced the futility of such work as the reduction of charge to "right-and-left-handed self-
locked intrinsic wrench-strains in a Kelvin gyroscopically-stable ether'' (Joseph Larmor's (1857-1942)
approach in the 1890s, still urged by Oliver Lodge (1851-1940) in 1913), the increasing complexity of the
mechanical ethers had prepared the way for their demise. A critical inventory of these ethers, drawn up in a
doctoral thesis by a student of Planck in 1906, numbers 2 families, 6 genera, and 25 species, all complicated,
none satisfactory.

The proposal to reduce mechanics to electricity, with which Planck and Lorentz flirted, briefly challenged the
program of mechanical reduction just after 1900 as a result of experiments on cathode rays that showed the
dependence of electromagnetic mass on velocity predicted by theory. Inherent complexity, the acceptance of
the theory of relativity, and interest in atomic structure and quantum theory quickly erased the
electromagnetic world picture after 1910. But around 1900 the competition among mechanical reduction,
electromagnetic reduction, and energetics confirmed the descriptionist stance that physical scientists could not
pronounce on the nature of things. This meek but sound epistemology could also be used to smooth external
relations. Those who took Poincaré's analogy between science and librarianship to heart gave the public no
ground for assimilating science with scientific materialism.

Perceptive contemporaries understood the maneuver. The physicist Woldemar Voigt (1850-1919) of
Göttingen: "The criticism of people outside the profession [prompted]...the various defensive efforts of the
last few years, among which...Poincaré's Science and hypothesis has taken a prominent place.'' The
philosopher of science Eduard von Hartmann (1842-1906): "The more physics keeps its completely
hypothetical character in mind, the better will be its scientific reputation in public opinion." The politician
Georges Sorel (1847-1922): "It seems to me that [scientists] have adopted this skeptical attitude...because
they believed that they would increase the confidence that men have in the results of science while freeing it
from a compromising alliance.'' The writer Paul Claudel (1868-1955): "What a deliverance for the scientist
himself, who henceforth will be able to devote himself in all freedom to the contemplation of things without
having the nightmare of an ‘explanation’ to maintain!''

The Borderlands and the Microworld

Ways in. By the eve of World War I, the great problems of physical science that had crystallized around 1870
had become the research front in much of basic physics and chemistry. The great problems concerned the
relationship of matter and electricity (or ether), and the relationship of form to function. The first problem,
which related primarily to physics, was often approached in a reductionist mode: could all electrical
phenomena be reduced to processes in the ether, and could the ether ultimately be described by concepts
derived from the behavior of ordinary matter? The second problem, which pertained primarily to chemistry,
was often approached in an architectural mode: how could atoms be connected together so as to mimic and
predict the results of chemical combinations and dissolutions? In one form or another, these problems had
vexed natural philosophers since the seventeenth century, but only around 1870 did they receive a sharp
enough formulation to be accessible to solution by the tools and methods then just becoming available to
scientists.

The sharpening occurred, on the one hand, through Maxwell's synthesis of electricity and magnetism,
published in definitive form in 1873. He claimed to have reduced the one to the other and light to both; for
those who followed him, light and heat became processes in the same suppositious ether that propagated other
electromagnetic disturbances. Moreover, his mathematical derivations and illustrative models suggested that
the ether might obey the laws of ordinary mechanics. This extraordinary synthesis had the philosophical
beauty, and practical difficulty, of having no place in it for the old concept of electrical fluid or charge, on
which most people fashioned their ideas of electricity and magnetism.

The problem of the relationship between form and function in chemical molecules began its sharpening with
August Kekulé's (1829-96) invention of the benzene ring, announced in 1865. Together with the concept of
the tetrahedral quadrivalent carbon bond published independently by van't Hoff and J.A. Le Bel (1847-1930)
in 1874, the ring made the determination of chemical isomers a matter of geometry. The combination also
guided the complicated syntheses that underwrote the organic dye industry, to which Kekulé himself gave the
key with his synthesis of the basis of the aniline dyes in 1872. Despite these grand concepts, however, the
architectural mode could not predict or account for the intricacies of valence or the process of chemical
combination. It turned out that the physicists' problem was in part architectural and the chemists' problem in
part reductive; that the solutions to both required a theory of atomic structure incorporating elementary
electrical charges.

This realization came largely as a consequence of investigations in the borderland of physical chemistry that
exploited the concept of "ion" developed during the 1880s around problems in electrolysis. Both physicists
and chemists contributed to the solution. In the epochal year 1870, F.W.G. Kohlrausch (1840-1910), a future
director of the PTR, published his finding that the conductivity of sulfuric acid increased with dilution. He
thus indirectly supported the concept of a solution proposed by Rudolf Clausius (1822-88) in 1857, that is, a
state of dynamic equilibrium between solute molecules and their separated charged constituents. In his
doctoral dissertation at the University of Uppsala in 1884, Arrhenius combined Kohlrausch's results and other
information to assert that the process supposed by Clausius increases with dilution until all molecules have
been dissociated. The separated charged fragments are the ions.

Arrhenius' insight, developed further with the help of Ostwald and van't Hoff, and a strong analogy between
the behavior of a dilute solution and a perfect gas, produced the intellectual capital to launch the journal
Zeitschrift für physikalische Chemie in 1887. The concept of dissociation, with its bizarre consequence that a
strong solution of a good electrolyte contained few solute molecules, met opposition. But as it proved its
power in fields far from its origin, for example, in the theory of osmotic pressure, it obtained wide acceptance
and application. Electrical discharges through dilute gases, especially the "cathode rays" whose effects
became conspicuous in high vacua, offered a fertile field for ionic analysis. Like electrolysis, the study of gas
discharges had been advanced during the 1870s and 1880s by the introduction of new laboratory techniques,
of which the mercury vapor pump, invented by the instrument maker Johann Geissler (1814-79) and
improved by William Crookes (1832-1919), was the most productive. Invocation of ionist ideas to explain
the appearances in the discharge tube revealed that the analogy between electrolytic ions and the suppositious
charged particles making up the cathode rays failed in one essential point. As J.J. Thomson and others
showed around 1897, the cathode-ray particle had a ratio of charge to mass (e/m) about a thousand times as
large as that of the lightest electrolytic ion, hydrogen. Thomson interpreted this result to indicate that the
cathode-ray "ion," which he called a "corpuscle," represented matter in a state of complete dissociation.
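The disparity Thomson measured can be put in numbers. The sketch below uses rounded modern values in CGS electromagnetic units, for illustration only; Thomson's rough 1897 estimate of the ratio was about a thousand, while the modern figure is about 1840:

```python
# Sketch of the disparity Thomson reported: charge-to-mass of the
# cathode-ray corpuscle versus that of the hydrogen ion from electrolysis.
# Rounded modern values in CGS electromagnetic units; illustration only.

F = 9648.5           # Faraday constant, emu of charge per gram-equivalent
A_hydrogen = 1.008   # atomic weight of hydrogen

em_ion = F / A_hydrogen    # e/m for the hydrogen ion, emu per gram
em_corpuscle = 1.759e7     # e/m for the corpuscle-electron, emu per gram

ratio = em_corpuscle / em_ion
print(f"e/m ratio, corpuscle to hydrogen ion: {ratio:.0f}")
```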

In 1881, in a famous lecture on Faraday delivered in London, Helmholtz had observed that if matter is
atomistic, so is electricity. The electrolytic ion thus offered a strong hint about the relationship between
electricity and matter and a locus for further investigation. Thomson's corpuscle, to which he assigned the
unit electrical charge (the "electron"), became the definitive test and probe of theories of electricity and
atomic structure. Einstein's relativity, the nuclear atom, the quantum theory of the atom, all descended from
concepts of the cathode-ray particle. By 1900, study of ions and discharges in gases had brought to light not
only the corpuscle-electron, but also x rays and radioactivity with its α, β, and γ rays. By the same time, the
techniques of spectral analysis, a field invented through the collaboration of Kirchhoff and his chemist
colleague Robert Bunsen (1811-99), yes, around 1870, had become powerful enough in the hands of Pieter
Zeeman (1865-1943) to detect the presence of corpuscle-electrons within atoms.

The International Congress of Physics of 1900 lumped together the several lines of investigation opened by
the study of ions and gas discharges in one session. These fields, carved from the borderland of physical
chemistry, may be considered a province of the microworld; or, to change the analogy, keys to it. The
remainder of the papers delivered at the Congress, some three-quarters of the total, had to do with traditional
subjects in mechanics, optics, electricity, magnetism, and geophysics. Figures derived from the abstracting
journal Fortschritte der Physik for the next decade (1901-10) show the same distribution of labor: physicists
gave around 75% of their attention (as measured by the papers they wrote) to macroscopic topics well away
from the borderlands and the microworld. Probably the predominance of traditional topics in chemistry was
even greater.

It was, however, the newer subjects that gave direction to physical science, that captured imaginations
and Nobel prizes. As appears from Table VI, which distributes the prizes awarded in physics and
chemistry between 1901 and 1910 thematically, the usually conservative judges in Stockholm almost
entirely reversed the proportion between novelty and tradition found in the scientific literature:

Table VI
Thematic allocation of Nobel Prizes
in physics and chemistry, 1901-1910

Borderlands and Microworld


Cathode and x rays Röntgen, Lenard, Thomson
Atoms Lorentz/Zeeman, Rayleigh, Ramsay, van der
Waals [Planck]
Radioactivity Becquerel/Curies, Rutherford
Physical chemistry Van't Hoff, Arrhenius, Ostwald
Traditional Fields
Physics Michelson, [Lippmann], Marconi/Braun
Chemistry Fischer, Baeyer, Moissan, Wallach

The brackets around the names of Planck and Gabriel Lippmann (1845-1921) signal that Planck, not
the eventual winner Lippmann, was the choice of the Nobel Committee for Physics for 1908. Had the
Swedish Academy of Sciences accepted the recommendation of its committee, as it usually did, the
borderlands and the microworld would have received 70% of the Nobel prizes in physics and
chemistry given between 1901 and 1910.

A few details about the borderlands may usefully be arranged here so as to support the proposition
that the progress of European physical science at its heyday occurred largely where physics and
chemistry overlapped. The arrangement has three divisions: "togetherness," which presents fields
developed by physicists and chemists in collaboration; "strength in numbers," which discusses their
common criterion for the reality of microphysical entities; and "blurs and jumps," which describes
paradoxes they uncovered about the strange new world they had entered.

Togetherness. "The connection between physics and chemistry is very close." Thus Marie Curie
(1867-1934), lecturing in 1913 on the latest news about radioactivity. Chemists and physicists had
entered this quintessential borderland literally hand-in-hand; for example the discoverers of radium,
Marie Curie and Pierre Curie (1859-1906), and Rutherford and his chemist colleague Frederick
Soddy (1877-1956), who together, in 1902, uncovered the tendency to self-destruction of radioactive
substances. There followed a decade of labor, mainly of chemists, to identify the chemical nature of
the decay products, most of which they could not fit into the table of the elements. By 1913, when
Marie Curie spoke the words just quoted, the concept of isotopy had just crystallized in the minds of
several radiochemists and a few radiophysicists had anchored the distinction between radioactive and
chemical processes in the nuclear atom. Their joint work resulted in the concept of atomic number,
which nicely expressed the conceptual distance they had come since 1870, when Dmitry Mendeleev
(1834-1907) had just ordered the elements by their atomic weights.

Mendeleev's classification had had no satisfactory explanation in the science of its time; indeed, many
chemists then refused to grant objective existence to the atoms whose experimentally determined
relative weights provided the order in his system. Nonetheless it had worked to predict the properties
of a few elements for which Mendeleev had left spaces, although to secure strict periodicity he had to
violate his ordering principle at several places. The radiochemists and atom builders of 1913
recognized that Mendeleev had been lucky: the ordering principle was atomic number (Z), not atomic
weight (A). They had found several radioelements characterized by the same Z and different values
of A, and others with the same A and a range of Z's. Expansion of the scheme to all elements was in
hand when the war broke out. As a discovery, isotopy ranked in logic if not in depth with that of
thermodynamics: in each case physical scientists realized that a single concept (atomic weight,
energy) required another (atomic number, entropy) to provide an adequate description of the
phenomena that fell within its purview.

In an entirely different direction the omni-competent ion pushed investigators of radioactivity out into
the universe. The old subject of atmospheric electricity received a new life with the discovery, in
1901, of the spontaneous ionization of air within a closed vessel. Johann Elster (1854-1920) and
Hans Geitel (1855-1923), two uniquely productive gymnasium teachers, traced the cause of the ions
to rays from radioactive substances in the earth's crust. Measurement of the intensity of the radiation
with height above the surface at first showed the decline expected if it arose from the earth. As
investigators climbed the Eiffel tower and mounted in balloons, however, they discovered a reversal:
the decline stopped and became an increase. In 1912 Victor F. Hess (1883-1964) diagnosed the cause
of the excess ionization in a "penetrating radiation" from beyond the atmosphere. Thus the ion
disclosed the cosmic rays, a bond between earth and the universe, as well as the intimacy between
physics and chemistry.

Chemists and physicists also entered the strange world of the quantum together. Planck had
approached the subject of heat radiation after years spent on chemical thermodynamics, and Nernst,
who engineered the first Solvay council, had been a collaborator of the ionists Arrhenius and
Ostwald. Nernst's interest in quantum theory had been aroused by Einstein's application of Planck's
radiation formula to the specific heats of solids at low temperatures. At the time, anything related to
low temperatures fell into Nernst's research program, which centered on his "heat theorem" (1906).
This theorem, which he liked to call the third law of thermodynamics and proffered as the solution to
the pressing problem of calculating chemical reaction rates from thermal data, made testable
predictions about low-temperature behavior.

The considerations that had brought Nernst to quantum theory continued to drive his and others'
research after the Solvay meeting. His laboratory carried on its search for confirmation of his
theorem in the vanishing of all rates of physical change as the temperature goes to absolute zero. He
and a student, Frederick Alexander Lindemann (1886-1957), improved Einstein's formula for specific
heats; so, in 1912, did the physical chemist Peter Debye (1884-1966) and, independently, the
physicists Max Born (1882-1970) and Theodore von Kármán (1881-1963), all then at the threshold of
distinguished careers. More importantly, the quantum theory offered a way to the direct computation
of the entropy of a gas; the physical chemists Otto Sackur (1880-1914) and Hugo Tetrode
took the way in 1912, and Sackur's former student and Einstein's collaborator Otto Stern (1888-1969)
followed in 1913. The theoretical calculations confirmed the measurements made in Nernst's
laboratory.

Although Nernst's program was the best prepared to profit from development of quantum theory,
physicists and chemists pursuing other research came to see its relevance to their work around the
time of the first Solvay council. For example, Jean Perrin (1870-1942) followed up Einstein's
suggestion that photochemical reactions occurred in consequence of absorption by the reactants of a
quantum of radiant energy; the suggestion related to Einstein's theory of the photoelectric effect
(1905). Another line attempted to apply quantum concepts to a realm closer to Planck's original
subject: radiation. Between 1911 and 1914, Niels Bjerrum (1879-1958), who had studied with Nernst
and become a lecturer on inorganic chemistry at the University of Copenhagen, deduced relations
between the specific heats and the band spectra of gases by quantizing the energy of the rotating
dumbbells with which he modeled molecules.

Proceeding in the other direction, the physicists J.J. Thomson and, following his lead, Niels Bohr
(1885-1962), took special trouble to show how their atomic models could help solve the problem of
chemical affinities. In his Corpuscular theory of matter (1907), Thomson described how neutral
model atoms filled with corpuscles could have electropositive or electronegative properties; how the
degree of positivity (or negativity) would change by the addition (or subtraction) of a single corpuscle
or electron; how groups of atoms with different numbers of corpuscles arranged in rings could mimic
chemical periodicity; and how, in agreement with the rule devised by Richard Abegg (1869-1910), a
former collaborator of Ostwald, Arrhenius, and Nernst, in 1904, the positive and negative valences of
a given element could sum to eight. He discussed the process of transfer of corpuscles from one atom
to another to make a molecule and showed how unions of atoms of the same chemical species might
involve the gain and loss of an electron. Meanwhile, Alfred Werner (1866-1919), who in 1890 had
applied the idea of a tetrahedral bond to compounds of nitrogen and so counted their isomers,
introduced secondary directed valences to account for complex double salts, hydrates, and other
"coordination" compounds.

Bohr adopted molecular models of Bjerrum's type in which two positive nuclei bind via electrons
circulating in a plane between them. From his rules of atomic structure, he calculated the energies of
the different configurations. The calculations allowed the union of hydrogen atoms but not of helium
atoms (the energy of the molecule exceeded that of the separated atoms in the case of helium but not
of hydrogen). Bohr's quantitative molecule-building, as opposed to Thomson's qualitative pictures,
was unprecedented.

Bohr's molecular model of 1913 did not require the transfer of an electron from one atom to another.
It thus did not conflict with the suggestion then recently put forth by physical chemists at the
Massachusetts Institute of Technology. Their leader, Ostwald's former student Noyes, had brought
home from Leipzig the problem why some substances make strong electrolytes. This problem in turn
focused his attention on electron bonding, in which he and his associates William C. Bray (1879-
1946) and Gerald E.K. Branch (1886-1954) at first followed the lead of Thomson's Corpuscular
theory of matter. Bray and Branch insisted, however, that some chemical bonds – indeed most of the
bonds in the organic world – did not involve the transfer of an electron.

Their former colleague Gilbert N. Lewis (1875-1946) developed their idea of a non-polar bond
(1913), which during the war (1916) he reconceived as a shared-pair bond. According to Lewis, one
or more electrons might be transferred during chemical combination so as to spend most of their time
around one of the constituent atoms or atomic groups; such a bond when broken in solution would
leave charged fragments (ions) and, depending on the ease of the break, make a good or mediocre
electrolyte. In other cases the bonding pair might spend equal time with the molecular constituents
and resist breakage and/or return to their parent atom in solution. Lewis had long since entered the
microworld on behalf of his students at MIT by representing atoms as cubes with electrons in the
corners. In his eight-fold way, the occupied corners indicated the atom's valence(s). In 1916 he
married the cube to the shared-pair bond (by attaching atoms along sides of their respective cubes)
and set the stage for a fight between his static model and the dynamical structures developed by
physicists. That too was a sort of collaboration.

A fine final example of borderland work was the discovery in 1911-1913 of the diffraction of x rays
and the means of determining the placement of ions in diffracting crystals. Physicists at the
University of Munich under Max von Laue (1879-1960) detected the diffraction; physicists William
Henry Bragg (1862-1942) and William Lawrence Bragg (1890-1971) determined parameters of the
crystals. X-ray spectroscopy immediately became a powerful tool for investigating atomic structure
in the hands of Rutherford's former student H.G.J. Moseley (1887-1915) and, as developed by
physical scientists from several disciplines, of molecular arrangements within chemical compounds.
The results confirmed the spatial structures derived by Le Bel, van't Hoff, Werner, and others from
analyses of chemical behavior.

Strength in numbers. Although Michelson has become a figure of fun for his pronouncements around 1900
that the future of physics lay in the next place of decimals, he was perfectly correct. Belief in the microworld
rested on numbers that identified its constituents. The most important of these prepotent numbers were the
charge e and mass m of the electron; the speed of light in vacuo c; the elementary quantum of action h
(Planck's constant); the gas constant per molecule k (Boltzmann's constant); the number of atoms or
molecules per mole N (Avogadro's number) or per cubic centimeter L (Loschmidt's number) of gas under
standard conditions; and the universal spectroscopic frequency R (Rydberg's constant). The charge on the
electron was the most significant of these constants: once in hand it produced, via measurable quantities, the
constants m, N, L, and k, and, via k and Planck's radiation law, h; and, with m and h, via Bohr's theory of
atomic structure of 1913, R. As Planck observed, Boltzmann had not introduced k as a distinct entity,
probably because, as Boltzmann intimated in his reminiscences of his colleague Josef Loschmidt (1821-95), a
pioneer in structural chemistry as well as in the microworld, he did not anticipate that it could be calculated
exactly.
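The chain of inference from e to the other constants can be illustrated in a few lines of arithmetic. The sketch below uses rounded modern CGS values, not the historical figures:

```python
# Sketch of the chain described above: with e in hand, Faraday's law of
# electrolysis yields N, the gas constant yields k, and so on. Rounded
# modern CGS values, for illustration, not the historical figures.

e = 4.803e-10      # electron charge, esu
F = 2.893e14       # Faraday constant in esu per mole (96485 C/mol)
R_gas = 8.314e7    # gas constant, erg per (mole K)

N = F / e          # Avogadro's number: charge per mole / charge per ion
k = R_gas / N      # Boltzmann's constant: gas constant per molecule
L = N / 22414.0    # Loschmidt's number: molecules per cm^3 of gas at STP

print(f"N = {N:.3e} /mol, k = {k:.3e} erg/K, L = {L:.3e} /cm^3")
```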

In 1910 Robert Millikan (1868-1953) gave the value e = 4.891·10^-10 esu on the basis of his famous
experiments in which a falling drop of oil bearing a small multiple of the electronic charge is suspended in a
precisely adjusted electrical field. His was the ultimate version of a technique invented in 1898 by J.J.
Thomson, who used water rather than oil for his droplets and the total charge and mass of the descending
drops rather than a balancing electric field to obtain a value for e. Both versions calculated m from the
density of the fluid and the radius of the drops as inferred from their rate of free fall. Thomson gave as his
best value 3.1·10^-10 esu. His measurement was part of his effort to secure facts about, and belief in, the
"corpuscle."
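The logic of the balancing method can be sketched as a force balance: Stokes' law applied to the observed free fall gives the drop's radius and hence its mass, and at balance the electric force qE supports the weight mg. All the input numbers below are invented for illustration; they are not Millikan's data:

```python
import math

# A sketch of the balancing method, not Millikan's data: Stokes' law
# applied to the observed free fall gives the drop's radius and mass;
# at balance the electric force q*E supports the weight m*g. All the
# input numbers below are invented for illustration. CGS units.

rho = 0.9        # density of the oil, g/cm^3
g = 980.7        # acceleration of gravity, cm/s^2
eta = 1.8e-4     # viscosity of air, poise
v_fall = 1.0e-2  # observed free-fall speed, cm/s (assumed)
E_field = 2.26   # balancing field, statvolt/cm (assumed)

# Stokes' law, v = 2 a^2 rho g / (9 eta), solved for the radius a:
a = math.sqrt(9 * eta * v_fall / (2 * rho * g))
mass = (4.0 / 3.0) * math.pi * a**3 * rho

q = mass * g / E_field      # charge required to hold the drop at rest
n_e = q / 4.8e-10           # charge in units of the elementary charge
print(f"radius {a:.2e} cm, charge {q:.2e} esu, about {n_e:.1f} e")
```

With these invented inputs the inferred charge comes out close to a small whole multiple of e, which is what made the method so persuasive.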

Two other methods, likewise based on phenomena newly detected and analyzed around 1900, gave values for
e. One was Planck's theory of blackbody radiation, from which he deduced a value of k and thence of N, from
which e = 4.69·10^-10 esu followed from the experimental value of the constant in Faraday's law of electrolysis.
At that time, e determined by Thomson's method ran from 1.20·10^-10 to 6.5·10^-10. Planck set the task of
refining the value of e high on the agenda of physical research. Confirmation of his value came from an
entirely unanticipated direction, the counting of alpha particles released in the radioactive disintegration of
radium emanation (radon). In 1908 Rutherford and his assistant Hans Geiger (1882-1945) counted the
number n of alpha particles caught on a scintillation screen and determined the total charge Q they carried to
an electrometer. They thus had the average charge of the particles E = Q/n and, on the assumption that all
carried the same charge and that E = 2e, a value of e = 4.65·10^-10 esu. The close agreement of this value with
that calculated by Planck was, as Planck modestly put it in his Nobel prize lecture of 1919, "a definite
confirmation of the usefulness of my theory."
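The arithmetic of the counting method is short enough to sketch. The counts and total charge below are invented so as to reproduce the published value; they are not Rutherford and Geiger's raw data:

```python
# The arithmetic of the counting method: n scintillations, total charge Q,
# and the assumption that each alpha particle carries two elementary
# charges. The inputs are invented to reproduce the published value;
# they are not Rutherford and Geiger's raw data.

n_counted = 10000   # alpha particles counted on the screen (assumed)
Q_total = 9.30e-6   # total charge delivered to the electrometer, esu (assumed)

E_avg = Q_total / n_counted   # average charge per alpha particle
e = E_avg / 2                 # since E = 2e
print(f"e = {e:.3e} esu")
```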

The Nobel committees on physics and chemistry agreed with Planck on the importance of determining the
constants of the microworld. Impressed by the agreement of the values obtained from radiation and
radioactivity, they voted to give the physics prize for 1908 to Planck and the chemistry prize to Rutherford.
Unfortunately for symmetry and the statistics in Table VI, the Swedish Academy learned that Planck's theory
of radiation rested on insecure foundations and rejected the advice of its physics committee. Rutherford's
chemistry prize is an incongruous reminder of the importance ascribed to knowing the constants as exactly as
possible. The Academy anticipated correctly that much depended on knowing the value of e. Bohr's
derivation of the Rydberg constant, the most persuasive result of his theory of atomic structure, depended on
the previous acceptance of the Planck-Rutherford-Millikan value of the charge on the electron.
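Bohr's derivation expressed the Rydberg constant entirely in terms of e, m, h, and the speed of light: R = 2π²me⁴/ch³ in wavenumbers. A quick check with rounded modern CGS values shows how tightly the constants interlock:

```python
import math

# Bohr's 1913 expression for the Rydberg constant in wavenumbers,
# R = 2 pi^2 m e^4 / (c h^3), evaluated with rounded modern CGS values
# to show how the constants interlock. Illustration only.

m = 9.109e-28    # electron mass, g
e = 4.803e-10    # electron charge, esu
h = 6.626e-27    # Planck's constant, erg s
c = 2.998e10     # speed of light, cm/s

R = 2 * math.pi**2 * m * e**4 / (c * h**3)
print(f"R = {R:.4e} per cm")
```

The result lands within a fraction of a percent of the spectroscopic value near 1.097·10^5 per cm, which is why any error in e propagated directly into the test of Bohr's theory.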

The atoms themselves emerged from twilight to spotlight through the newfound ability of physicists
and chemists to count them. The definitive deduction of N was the work of Perrin, who became
professor of physical chemistry at the Sorbonne in 1910. Perrin contrived a way to make uniform
gumballs so small that, although still visible in a microscope, they remained suspended in solution
between the tug of gravity and osmotic pressure P created by their decrease in number with height.
According to van't Hoff's application of the dissociation theory to osmosis, P = 2nw/3, where w is the
mean kinetic energy of a solute particle and n is their number per unit volume.

Perrin could calculate P from measurable data about the balls and therewith measure w, which,
according to an inescapable consequence of the kinetic theory, should be 3kT/2, where T is the
temperature. Perrin made N = 7.05·10^23, about 15 percent off from the value J.D. van der Waals
(1837-1923) had obtained thirty years earlier by inference from his equation of state for an imperfect
gas. The colloidal particles dance about. Their agitation, the Brownian motion, arises according to
the kinetic theory from an imbalance of the impacts on the particles from the molecules in the liquid
that surrounds them. In 1905 Einstein calculated the mean displacement of the colloidal particles
from the places at which they are first seen as a function of time. The resulting formula includes N
(by setting the mean kinetic energy of a Brownian particle equal to the mean molecular energy) and
quantities Perrin could measure or determine: the mean displacement, the radius of the particles, the
duration of observation, and the viscosity of the solvent. What Perrin called "the rough mean" of his
measurements was 7.0·10^23. It agreed fairly well with the value Perrin calculated from Faraday's law
using e as determined by Rutherford: 6.2·10^23.
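Einstein's displacement formula, as Perrin exploited it, reads ⟨x²⟩ = (RT/N)·t/(3πηa) and can be solved for N. The sketch below inverts it with invented but plausible inputs, not Perrin's measurements:

```python
import math

# Einstein's displacement formula as Perrin inverted it:
# <x^2> = (R T / N) * t / (3 pi eta a), solved for N. The inputs are
# invented but plausible, not Perrin's measurements. CGS units.

R_gas = 8.314e7    # gas constant, erg per (mole K)
T = 290.0          # temperature, K
eta = 0.011        # viscosity of the solvent, poise (assumed)
a = 2.12e-5        # radius of the colloidal grains, cm (assumed)
t = 30.0           # duration of observation, s
x2_mean = 4.70e-7  # mean squared displacement, cm^2 (assumed)

N = (R_gas * T / x2_mean) * t / (3 * math.pi * eta * a)
print(f"N = {N:.2e} per mole")
```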

The numbers that physical scientists accepted as a strong proof of the existence and character of the
microworld did not agree perfectly. To take but one example, Thomson's values for the
charge-to-mass ratio e/m of the corpuscle-electron from various gases spread over as wide a range
(apart from order of magnitude) as the charge-to-mass ratios of the electrolytic ions from potassium to
manganese. Thomson nonetheless declared all corpuscles to be the same. Likewise, Rutherford
assumed, and tried to show, that all alpha particles have the same E/M. There was no place for
extended families of electrons or alpha particles in the microworld. No more was there room for a
sequence of Avogadro's numbers. Discontinuity requires hard edges.

Blurs and jumps. Maxwell remarked under "Atom" in the Encyclopaedia Britannica for 1875 that the
final constituents of an element had to be as much alike as manufactured articles. The analogy, as
much a tribute to the accuracy of Victorian machine tools as to the contrivances of nature, rested
primarily on then recent spectroscopic investigations of celestial and terrestrial light. It appeared that
iron in the stars produced the same telltale spectrum as iron from ore or from scrap. As Maxwell put
it, the atoms made by nature did not chip or wear away; those of a given element were the same
always and everywhere. The ascription of an internal structure to atoms made Maxwell's analogy
superficially wrong and profoundly right.

The presence of a swarm of electrons in an atom suggested a mechanism for radioactive decay: a
gradual loss of energy through ordinary radiation. Atoms of, say, radium, would differ from one
another in their internal energies and the time, calculable in principle, of their inevitable explosion. It
was distressing to discover that radium atoms did not age. Hoping to accelerate the internal process
of decay, Rutherford had his student Howard Bronson (1878-1968) expose radioelements to very high
temperatures. Bronson failed in this hopeless task; but he noticed (in 1904) that the emission of alpha
particles from his samples seemed to undergo random fluctuations. In 1908 Rutherford and Geiger
noticed a similar randomness in their counting of alpha particles, and two physicists at the University
of Berlin, Edgar Meyer (1879-1960) and Erich Regener (1881-1955), confirmed and extended
Bronson's results. Radioactivity, a process apparently internal to atoms, thus joined a growing list of
phenomena, notably the Brownian motions studied by Perrin, in which the grainy structure of matter
expressed itself in fluctuations.

According to Ludwig Boltzmann's (1844-1906) statistical mechanics, which underpinned Einstein's
theory of Brownian motion, fluctuations in the number and velocity of tiny particles in minute
volumes of a gas in equilibrium were expected whether measurable or not. In principle a Laplacian
calculator who knew the necessary molecular data could predict the extent and direction of the
fluctuations. Could the fluctuations in radioactive decay be understood similarly? Disciples of
Boltzmann at the University of Vienna thought not. Egon Schweidler (1873-1948) taught the first
International Congress on Radiology and Ionization that Rutherford and Soddy's law of radioactive
decay, n = n0·exp(-λt), admitted the interpretation that in a small interval of time dt an atom of a
radioelement had a probability λdt of exploding. Geiger and Rutherford regarded their experiments as
confirming this interpretation.
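Schweidler's reading of the decay law is easy to simulate: give every surviving atom, in each short interval dt, the same probability λ·dt of exploding, regardless of age, and the exponential law emerges from pure chance. A minimal sketch, with arbitrary parameters:

```python
import math
import random

# A small simulation of Schweidler's interpretation: every surviving atom,
# in each interval dt, explodes with probability lam*dt regardless of age.
# The surviving population then traces n0*exp(-lam*t), fluctuations and all.
# Parameters are arbitrary illustration values.

random.seed(1)
n0, lam, dt = 20000, 0.5, 0.01   # atoms, decay constant (1/s), time step (s)

alive = n0
t = 0.0
while t < 2.0:
    # each surviving atom decays independently in this interval
    decays = sum(1 for _ in range(alive) if random.random() < lam * dt)
    alive -= decays
    t += dt

expected = n0 * math.exp(-lam * 2.0)
print(f"survivors: {alive}; exponential law predicts about {expected:.0f}")
```

The run-to-run scatter of the survivor count around the exponential prediction is exactly the kind of fluctuation Bronson, Meyer, and Regener observed.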

Schweidler's colleague Franz Exner (1849-1926) drew the surprising conclusion in his inaugural
address as rector of the University of Vienna in the fall of 1908: the fluctuations that scientists
observed were chance events incalculable in principle. His conclusion fit his theme, "laws" in the
humanities and natural sciences, perhaps too neatly. No discipline, he inferred, could depend on exact
laws; the probability interpretation of radioactive fluctuations indicated that the physicist and chemist,
as well as the historian, faced an ultimate acausality in nature. Maxwell's commercial analogy
updated by this interpretation would read: atoms are like manufactured articles that fail inexplicably
and unpredictably.

Soon physicists detected similar behavior in the emission of spectral lines. After listening to Solvay,
participants in the physics council of 1911 tried to find a place in their science for the quantum of
energy that Einstein and Lorentz had discovered in Planck's equation E = hν (ν the frequency of the radiation), and which Einstein had used to argue that, for the "heuristic" purpose of understanding the photo effect and
other optical phenomena, light could be considered a flow of discrete bundles of energy. The Solvay
discussants considered whether the apparent discontinuity could be referred to the internal motions of
atoms, where physical scientists located the mechanisms of the photo effect and the emission and
absorption of light. Planck tried to limit the damage, by treating absorption of energy as a continuous
process and emission as provisionally statistical. Nothing, he said, precluded the eventual discovery
of a causal mechanism; and in August 1914, in a plenary address at the University of Berlin, he
inveighed against those who took the easy way and denied the existence of strict (causal) laws just
because some physical phenomena appeared to elude them.

Marie Curie and William Henry Bragg spoke to a similar purpose. She reminded her colleagues at the second
Solvay council (1913) that the rule of exponential decay indicated that the longevity of a radioactive
atom did not depend on how long it had existed; conceded that the failure to influence the decay by
external force made it very difficult to conceive of a mechanism that would give the law of
exponential decay; and supplied a suggestion by her colleague André Debierne (1874-1949) of just
such a mechanism. Bragg had offered arguments in 1910 for taking x rays to be streams of neutral
particles; two years later he made a contribution to proving their wave-like character that earned him a
Nobel prize; and yet (and rightly) he could not forsake his earlier arguments on the basis of his later
experiments.

Bohr's theory of the production of spectral lines (1913) rejected such temporizing. Both absorption
and emission occur by jumps, discontinuously, in a manner formally incalculable on ordinary
mechanical principles. That same year, in an address to the Société française de physique, Paul
Langevin (1872-1946) discussed the radical novelty of the unavoidable "physics of the
discontinuous." So great was the novelty that it could not be expressed in the guarded positivistic
utterance favored by French physical scientists. Langevin: "The profound change recently
accomplished in physics is characterized particularly by the penetration into every part of our science
of the fundamental notion of discontinuity. Today we must base our conception of the world and our
prediction of phenomena on the existence of molecules, atoms, and electrons. It also appears
necessary to suppose that magnetic moments are all integral multiples of a common element, the
magneton, and that matter can only emit electromagnetic radiation in a discontinuous way, by quanta
of energy....[A] new world has been revealed to us."

Sources

Congrès international de physique. 4 vols. Paris: Gauthier-Villars, 1901.

Doremus, Charles A. Progress in electrochemistry in the U.S. since 1900. Berlin: Deutscher Verlag, 1904.
(International Congress of Applied Chemistry, V. Report, 4, 716-43.)

Heilbron, J.L. "Fin-de-siècle physics." In Carl Gustav Bernhard, ed. Science, technology and society in the
time of Alfred Nobel. Oxford: Pergamon, 1982. Pp. 51-73. Contains full references to the quotations given in
the section "Scientism and descriptionism" above.

Institut de Physique Solvay. La structure de la matière. Rapports et discussions [1913]. Paris: Gauthier-
Villars, 1921.

Langevin, Paul, and Maurice de Broglie, eds. La théorie du rayonnement et les quanta. Rapports et
discussions. Paris: Gauthier-Villars, 1912. The German edition, Die Theorie der Strahlung und der Quanten
(Halle/S: W. Knapp, 1914), trans. A. Eucken, has an important update by Eucken, "Die Entwicklung der
Quantentheorie von Herbst 1911 bis zum Sommer 1913," 371-405.

Office Central des Institutions Internationales. Annuaire de la vie internationale, 1 (1908-09); 2 (1910-11).
Contains Alfred A. Fried, "La science de l'internationalisme," 1, 23-28.

Société Française de Physique. Les progrès de la physique moléculaire. Conférences faites en 1913-1914.
Paris: Gauthier-Villars, 1914. Contains Marie Curie, "Les radioéléments et leur classification," 100-26, and
Paul Langevin, "La physique du discontinu," 1-46.

Wells, H.G. A modern utopia. London: Chapman and Hall, 1905.

Studies

Barkan, Diana Kormos. Walther Nernst and the transition to modern physical science. Cambridge University
Press, 1999.

Bauer, John, et al. The electric power industry. New York: Harper, 1939.

Cahan, David. "The institutional revolution in German physics, 1865-1914." Historical studies in the physical
sciences, 15:2 (1985), 1-65.

Coen, Deborah R. "Scientists' errors, nature's fluctuations, and the law of radioactive decay, 1899-1926."
Historical studies in the physical and biological sciences, 32 (2002), 179-205.

Dolby, R.G.A. "Debates over the theory of solution: A study of dissent in physical chemistry." Historical
studies in the physical sciences, 7 (1976), 297-404.

Haber, L.F. The chemical industry, 1900-1930. Oxford University Press, 1971.

Hanle, Paul. "Indeterminacy before Heisenberg: The case of Franz Exner and Erwin Schrödinger." Historical
studies in the physical sciences, 10 (1979), 225-269.

Hannah, Leslie. Electricity before nationalisation. A study of the development of the electricity supply industry
in Britain to 1948. Baltimore: Johns Hopkins, 1979.

Hirosige, Tetu, and Sigeko Nisio. "Rise and fall of various fields of physics at the turn of the century."
Japanese studies in the history of science, 7 (1988), 93-113.

Homburg, Ernst. "The emergence of research laboratories in the dyestuffs industry, 1870-1900." British
journal for the history of science, 25 (1992), 91-111.

Johnson, Jeffrey Allan. The Kaiser's chemists. Science and modernization in imperial Germany. Chapel Hill:
University of North Carolina Press, 1990.

Knight, David, and Helge Kragh, eds. The making of the chemist. The social history of chemistry in Europe,
1789-1914. Cambridge University Press, 1998.

Livesay, Harold C. Andrew Carnegie. Boston: Little Brown, 1975.

Marage, Pierre, and Grégoire Wallenborn, eds. The Solvay councils and the birth of modern physics. Basel:
Birkhäuser, 1999.

Mitchell, B.R. European historical statistics, 1750-1970. New York: Columbia University Press, 1978.

Servos, John W. Physical chemistry from Ostwald to Pauling. The making of a science in America. Princeton
University Press, 1990.

Shenton, Herbert Newhard. Cosmopolitan communication. The language problems of international
congresses. New York: Columbia University Press, 1933.

Taylor, A.J.P. The struggle for mastery in Europe, 1848-1918. Oxford University Press, 1971.

Thackray, Arnold, et al. Chemistry in America, 1876-1976. Historical indicators. Dordrecht: Reidel, 1985.

Vierhaus, Rudolf, and Bernhard vom Brocke. Eds. Forschung im Spannungsfeld von Politik und Gesellschaft.
Geschichte und Struktur der Kaiser-Wilhelm/Max-Planck Gesellschaft. Stuttgart: Deutsche Verlags-Anstalt,
1990.

Zängl, Wolfgang. Deutschlands Strom. Die Politik der Elektrifizierung von 1866 bis heute. Frankfurt: Campus,
1989.

II. Ray Physics and Chemistry

The cathode-ray tube, like the cyclotron, was the leading instrument of physical research of its day
and required and incorporated the results of the latest commercial technology. The cathode-ray tube
also anticipated the cyclotron as a producer of Nobel-prize winners and as an accelerator of sub-
atomic particles. This last similarity hides a profound difference, however. Twentieth-century
physicists built the cyclotron to accelerate particles. Nineteenth-century physicists built what they
later called the cathode ray tube to study gas discharges. They did not anticipate the sub-atomic
world into which their experiments would take them.

Colorful striations appear when an electric discharge passes through a gas under low pressure; with
further evacuation, the lights go out and a fluorescent glow in the walls of the tube becomes
conspicuous. This apparatus, often named after its developer J.H.W. Geissler (1815-79), consisted of
a glass vessel with electrodes at either end connected to an induction coil. Side tubes provided
connections to the all-important pump. In 1850, the best pumps could reach 10⁻⁴ atm; by 1890, 10⁻⁸ atm.
The improvement resulted from the perfection of a pump invented by Geissler and considerably
advanced by the mechanic H.J.P. Sprengel (1834-1906) in 1865, which substituted moving droplets
of mercury for the piston of the traditional apparatus.

These pumps at first were fragile and, with their reliance on mercury vapor, dangerous. They became
robust, safe, and reliable through the efforts of entrepreneurs of electric lighting. Lamp filaments last
longer and glow brighter in a mercury-pump vacuum than in one made by a mechanical pump. Thomas Edison (1847-1931)
devoted much effort to simplifying, automating, and strengthening the Sprengel pump. William
Crookes (1832-1919), an unusual compound of romantic chemist and shrewd business man, realized
a ten-fold economy of scale by hiring women to make and repair his pumps.

The electrical industry contributed batteries, switches, and insulators as well as inexpensive reliable
pumps to the study of gas discharges. Other new commercial technologies, notably photography,
accelerated the spread of discoveries made in and around the discharge tube. X rays became instant
news through the dramatic picture of Frau Röntgen's hand sent by her husband to the physicists of
Europe in the Christmas mail of 1895. And Henri Becquerel's (1862-1908) discovery of radioactivity
turned on the availability of cheap, sensitive photographic plates.

1. Cathode and other rays

The fluorescent patch near the cathode in high vacua began to engage the sustained attention of
physicists in the 1870s. In 1869 Johann Wilhelm Hittorf (1824-1914), professor of physics at the
then new University of Münster, drew the cathode to a point, found the tell-tale fluorescent patch on
the glass opposite it, and showed that a solid body placed between the cathode and the patch killed the
fluorescence. In 1876 Eugen Goldstein (1850-1930), a former student of Hermann von Helmholtz
(1821-94), found that the same shadow effect occurred with an extended source and that the cathode
rays (his term) appeared to be emitted in straight lines perpendicular to the cathode. Like Hittorf,
Goldstein concluded that the rays were a type of light.

Crookes enriched the study of cathode rays with the concept of a fourth state of matter, which he had
developed to explain the behavior of the radiometer. In a gas so dilute as to show the radiometer
effect, he told the British Association for the Advancement of Science at its meeting in 1879, "we
have actually touched the border land where Matter and Force seem to merge into one another, the
shadowy world between the Known and the Unknown, which for me has always had peculiar
temptations." Crookes gave urgent attention to the fourth state of matter, the "Seat of Ultimate
Realities, subtle, far-reaching, wonderful." He was an active member of the Society for Psychical
Research.

To enroll cathode rays in the fourth state, Crookes construed them as molecules charged at, and
repelled by, the cathode. Owing to the dilution of the gas, they could cross the discharge tube without
hitting other molecules. In this situation, Crookes said, "we seem at length to have within our grasp
and obedient to our control the little indivisible particles which with good warrant are supposed to
constitute the physical basis of the Universe....I venture to think that the greatest scientific problems
of the future will find their solution in this Border Land."

Crookes reshaped the Geissler tube, placed the anode off to the side, and, in one spectacular version,
mounted a Maltese cross made of mica hinged to be movable into the line of fire of the cathode rays.
Where the cross blocked the way, the glass did not fluoresce. He achieved similar results by
deflecting the course of the rays with a magnet. All his evidence identified cathode rays as streams of
charged particles the size of gas molecules.

There was other evidence, however. In experiments done in 1882 and 1883, Heinrich Hertz (1857-
94), like Goldstein a student of Helmholtz', failed to deflect the rays by an electric field generated
between two plates placed outside the tube against the glass. The null result made consistent
modeling impossible: the material model seemed to require charged particles impervious to electric
forces and the light model electromagnetic waves bendable in a magnetic field. In 1886, Goldstein found another sort of ray in the cathode ray tube. He bored holes in an extended cathode to allow
anything flowing towards it to enter a private space beyond. He had a rich harvest: a colorful glow
set up behind the cathode occasioned by what he called "canal rays." They did not bend in a
magnetic field.

The following year another species of ray put in an appearance: the waves predicted by James Clerk
Maxwell (1831-79) and first generated and detected by Hertz. Although Hertzian waves (or wireless
waves) bore little analogy to cathode rays, Hertz regarded both as disturbances in the electromagnetic
ether. In his last experiments, completed in 1893, he showed that cathode rays could pass through
very thin metal foils entirely opaque to gaseous molecules. These experiments gave rise to still
another sort of ray when prosecuted by Hertz' assistant Philipp Lenard (1862-1947). By replacing a
portion of the wall of the discharge tube opposite the cathode with a sufficiently thin aluminum foil,
Lenard could pass the cathode rays out of the tube where they could be studied under conditions
impossible to achieve within it. These transmitted rays, which Lenard could track in ordinary air up
to 8 cm from the metal window by using a phosphorescent screen, soon became known by his name.
The properties of Lenard rays seemed to confirm the model of cathode rays favored by Hertz.

The ever vigilant Hertz had noticed that light from the spark discharge he used to generate wireless
waves affected the length of the spark by which he detected them. He satisfied himself that the cause
lay in the ultraviolet portion of the spark's spectrum. Subsequent investigation by several physicists
including Augusto Righi (1850-1920) and Lenard revealed that ultraviolet light could liberate
negative charge carriers from certain metals (the "photo-effect"). A magnet could deflect these
carriers. They did not seem to be metal dust or gas particles. They seemed to resemble cathode rays.

As the situation grew more complex, a man who prized clear mechanical models came forth to lead
physicists and chemists into Crookes' fertile borderland. This Joshua, Joseph John Thomson (1856-
1940), had mastered mathematics in the Cambridge manner and devised elaborate schemes to avoid
the concept of electrical charge, but he could also work in a humbler style with simple, indeed
sometimes overly simplified, models. As professor of physics and head of the Cavendish Laboratory
in succession to Maxwell and Lord Rayleigh (1842-1919), Thomson concentrated on exact
measurement of electrical standards. He did not develop a research program, however, apart from
stimulating interest in the new ionic theory of liquids, before 1895, when the university began to allow
students who had not obtained an undergraduate degree in Cambridge to work there for a research
qualification. The reform brought in several brilliant young men, notably Ernest Rutherford (1871-1937), C.T.R. Wilson (1869-1959), J.S. Townsend (1868-1957), and, briefly, Paul Langevin (1872-1946). All helped Thomson establish a strong research group centered on the problem of cathode
rays.

While Helmholtz' school collected evidence and rays in favor of the light model, Arthur Schuster
(1851-1934), professor of physics at the University of Manchester, tried to measure properties of the
cathode-ray particle. Bending in a magnetic field could yield the ratio e/m of the problematic
particles' charge and mass, provided that the radius of curvature of the rays R and their velocity v
could be measured: when a field H is perpendicular to the velocity, e/m = v/HR. R could be measured. To get v, Schuster supposed that eV = ½mv², V being the entire potential drop across the tube, whence e/m = 2V/(HR)². In 1890 he observed that the last equation made e/m about 10⁶ emu for nitrogen whereas, on the assumption that the rays moved at velocities given by the kinetic theory of gases, e/m = 10³. Since the smaller number agreed with his theory of the ionic character of the cathode-ray
particle, Schuster accepted it. His version of the ballistic hypothesis left open the questions how gas
molecules could pass through Lenard's window and why cathode rays did not respond to an electric
field.
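Schuster's two relations close on themselves, which makes his scheme easy to check numerically. The values below are invented round numbers in cgs-emu units, not Schuster's data:

```python
# Consistency check of Schuster's scheme (illustrative, invented numbers,
# cgs-emu units): magnetic bending gives e/m = v/(H*R); the energy relation
# eV = (1/2)*m*v^2 then yields the combined formula e/m = 2V/(H*R)^2.
em_assumed = 1.0e7   # emu/g, assumed charge-to-mass ratio (invented)
V = 5.0e11           # potential drop in abvolts (= 5000 volts; invented)
H = 100.0            # magnetic field, gauss (invented)

v = (2 * em_assumed * V) ** 0.5       # speed from eV = (1/2) m v^2
R = v / (em_assumed * H)              # radius from H e v = m v^2 / R
em_recovered = 2 * V / (H * R) ** 2   # Schuster's combined formula

print(v, R, em_recovered)  # em_recovered reproduces em_assumed
```

The loop closes exactly; the physics of Schuster's underestimate lay not in the algebra but in taking V as the full tube potential and trusting his ionic hypothesis over the measured number.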

In 1895 a young physicist in Paris, Jean Perrin (1870-1942), went far toward removing the second
difficulty by arranging to catch the rays in a metal cup in the discharge tube. The cup accumulated
negative electricity. Thomson then identified the flaw in Hertz' experiment with electrostatic bending
as a space charge on the inner side of the tube wall that screened the rays from the field generated by
the external plates. Thomson put the plates inside the tube. The rays bent. He adjusted the electric
field F so as to annul the effect of an external magnetic field H. In this case the electrostatic force on a cathode-ray particle, eF, equaled the electromagnetic force evH: v = F/H; e/m = 10⁷ emu, about 1000 times the value (e/m)_H obtained from hydrogen in electrolytic experiments.
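Thomson's two-step procedure (balance the fields to find v, then bend magnetically to find e/m) reduces to two one-line formulas. The numbers below are invented round values in cgs-emu units, chosen only to land on Thomson's order of magnitude:

```python
# Thomson's crossed-field method (illustrative, invented values, cgs-emu):
# the electric force e*F cancels the magnetic force e*v*H when v = F/H;
# the magnetic bending radius R alone then gives e/m = v/(H*R).
F = 3.0e10   # electric field, emu (about 300 volts/cm; invented)
H = 10.0     # magnetic field, gauss (invented)
R = 30.0     # radius of curvature under H alone, cm (invented)

v = F / H            # velocity of the undeflected rays, cm/s
em = v / (H * R)     # charge-to-mass ratio, emu/g

print(v, em)  # 3.0e9 cm/s and 1.0e7 emu/g, Thomson's order of magnitude
```

Note that no assumption about the tube potential enters, which is why Thomson's method evaded the trap in Schuster's.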

Thomson assumed that the e of the corpuscle, as he re-christened the cathode-ray particle when
announcing his discovery in 1897, equaled e_H. Hence m = m_H/1000, which Thomson understood to
mean that the corpuscle was a thousandth the size of an atom. So slight and swift a mite might well
be able to pass through solid metal. Thomson's conclusion that cathode rays consist of material
particles much smaller than atoms, each probably bearing the unit electrical charge, proved the
definitive and surprising solution to the problem of the cathode rays. It was soon received
knowledge. Thomson's further conjecture in the style of Crookes, that corpuscles made up all matter,
though neither correct nor widely accepted, proved immensely fruitful.

An extraordinarily serendipitous discovery made in 1896 that owed nothing, except similarity in
apparatus, to the investigation of cathode rays, confirmed Thomson's grand corpuscular theory of matter. A Dutch spectroscopist, Pieter Zeeman (1865-1943), managed to alter the frequency of the
bright yellow lines in the sodium spectrum by applying a strong magnetic force to a discharge
through sodium vapor. His professor, the dean of European theoretical physicists, H.A. Lorentz
(1860-1928), had ascribed spectral lines to "ions" describing circular orbits in or around atoms. From
Lorentz' model and his measurements, Zeeman could determine the value of e/m of the radiating ion: it came out around 10⁷ emu. In a paper published in the main British journal for physics, the
Philosophical Magazine, in the autumn of 1897, Zeeman insisted that the Lorentz ion could not be
any of the ions known from electrolysis.

Thomson seized on the magneto-optic effect of Lorentz and Zeeman as confirmation of the ubiquity
of the corpuscle as well as support for the existence of a particle with an e/m a thousand times less
than the hydrogen ion's. He resisted the identification of the corpuscle with Lorentz's ion or with any
other version of "electron" theory, which rested on assumptions unnecessary to the existence of the
corpuscle. He persisted in calling his creation a corpuscle long after most physicists had adopted
"electron" for the omnipresent cathode-ray particle – alias the carrier of the photo-current, the agent
responsible for optical spectra, and the deviable component of the rays from radioactive substances.

2. X rays and radioactivity

Lenard rays contained more than Lenard himself had dreamed of. Towards the end of 1895 Wilhelm
Conrad Röntgen (1845-1923), seeking a new research line after a year's service as rector at the
University of Würzburg, applied for advice to Lenard, who returned some hard-to-get aluminum foil.
In November Röntgen made the observation that led to his capital discovery: a detecting screen lying
on a table fluoresced far beyond the range of Lenard rays. His "new sort of rays" or "x rays" could
not be bent by an electric or magnetic field, traveled in straight lines from the fluorescent spot where
cathode rays struck the glass, and could not be reflected or refracted. They thus did not qualify either
as light or as matter. Even more astonishing, they could penetrate solid objects opaque to light, like
the human body.

Since Röntgen's apparatus was a standard item in physics laboratories, physicists immediately
confirmed his discovery and, like Lenard but with less reason, kicked themselves for having missed
it. The most consequential follow up came in Paris, where the great mathematician Henri Poincaré
(1854-1912) ventured into experimental design with the provocative suggestion that fluorescing
bodies other than glass might give rise to x rays. Enthusiastic explorers put forward the sulfides of
zinc and barium, among other candidates, but only one, a crystal containing uranium, worked reliably.
This remarkable crystal belonged to Henri Becquerel, who had inherited it from his father, who had discovered its fluorescence. Following Poincaré's advice Becquerel exposed his heirloom to the sun,
put it atop an unexposed photographic plate, and shut them up in a drawer. When developed the plate
showed the expected darkening, arising, so Becquerel told the Académie des sciences in Paris, from
rays stimulated by the fluorescence and able to penetrate the plate's protective wrapping.

After a week in which the sun hid its face from Paris, Becquerel developed the plate expecting to find
it less fogged than usual. He was disappointed. No exposure to sunlight appeared to be necessary;
the crystal gave off new rays, spontaneously, which, after identifying their source, Becquerel
called uranium rays and others called Becquerel rays. Besides their ability to penetrate paper
wrappings, Becquerel found that they could be reflected at metallic surfaces and refracted by several
substances. They could not be x rays. All this he had secured by the end of 1896.

Maria Skłodowska (1867-1934) or Marie Curie (she had married Pierre Curie (1859-1906), professor
of physics at the Ecole municipale de physique et chimie industrielles, in 1895), was then a student at
the University of Paris. She took Becquerel's discovery as her thesis project. Using an electrical
method developed at the Cavendish, she showed that a certain uranium ore called pitchblende was
three or four times more radioactive (to use the word she coined) than uranium. She reasoned that
pitchblende must contain a strong radiator and she had a ton of it dumped at Pierre's laboratory.
Together they tore the ore apart chemically. One fraction was 400 times more active than uranium.
They named the unknown and otherwise undetectable radioactive agent in this fraction "polonium"
after Marie's native country, Poland. Early in 1899, they had a substance chemically similar to
barium and 100,000 times more active than the same weight of uranium. They called it "radium" and
expected that, like polonium, it would turn out to be a new element.

Radium immediately captured the public imagination. Like x rays, it was a subject of wonder in an
age not yet jaded with wonders. Again like x rays, it seemed providentially provided to cure the ills
of the 20th century. Both were used to treat cancer and skin problems, often doing equal damage to
doctor and patient. But whereas x rays could be generated cheaply and plentifully and immediately
found use as a diagnostic tool, radium, a scarce and expensive chemical product, was applied mainly
to burning out serious tumors. Its cost, driven up by medical demand, made it difficult to procure in
quantity for physical experiments. A lucky few received specimens as gifts from Marie Curie.

One of these few was Rutherford, who took up the study of the rays from radioactive substances after
switching his research from Hertzian waves to x rays. Thomson had discovered that x rays made the
air through which they passed conducting. He invited Rutherford to join him in extending the work.
They designed an apparatus to measure the rate at which x rays broke up molecules of a gas held between the plates of a condenser. When the field between the plates drew off the ions as quickly as
they formed, the current through an electrometer attached to them maximized or "saturated."
Thomson and Rutherford took the saturation current as the measure of the "ionizing power" of the
rays.

The technique could be applied to any ionizing radiation (the Curies would use it in their
investigation of the activity of pitchblende fractions). Rutherford tried it on uranium rays, which
Becquerel, following up Thomson's discovery concerning x rays, found to ionize air. Rutherford put
some powdered uranium on one plate of his condenser, noted the saturation current, and then piled
sheets of aluminum on top of the radiator. With a few sheets the current fell considerably; more
sheets only cut it slightly. Conclusion: uranium rays consist of an easily absorbed component, which
Rutherford called "alpha," and a more penetrating one, "beta."

Rutherford's discovery aroused great interest among his few fellow students of radioactivity when he
published it in 1899. The Curies showed that polonium emits only alpha rays; Becquerel and others
bent beta rays in electric and magnetic fields, and, consequently, by the criteria established during the
contest over cathode rays, showed that they were streams of particles; Thomson, following his idée
fixe, measured the e/m of the beta particle, which came out, within his usual wide margin of error,
equal to that of the corpuscle; and Paul Villard (1860-1934), a collaborator of the Curies, discovered a
radiation more penetrating than beta in the rays from radium, which, naturally, he labeled "gamma."

These results were all in hand when physicists gathered at their great congress during the
International Exposition in Paris in 1900. Becquerel and the Curies showed their radioactivity, and
Thomson revealed the structure of matter. He pointed out the wonderful consistency of the e/m
values of the particles of cathode rays, photocurrents, and beta rays, all of which showed matter in a
"corpuscular state," the fourth state of matter at the dawn of the 20th century. The physicists and
chemists of 1900 had no way of knowing that their hunt for rays would not be successful again until
they had learned much more about the microworld. But in their enthusiasm they went on to add,
briefly to be sure, N rays, magnetic rays, and black light to the cornucopia they had discovered during
the previous quarter century.

3. Closer Looks

Alpha rays

Around 1900 Becquerel managed to destroy the activity of a uranium salt by chemical treatment.
When he examined the salt a few months later, however, it had regained its power. He asked Crookes
to check the bizarre phenomenon. Crookes succeeded immediately by extracting from each of his
uranium salts a radioactive product, "uranium X," differing chemically from uranium. Unlike
uranium, UX emitted only beta particles and gradually lost its intensity. While it died, however, the
uranium salts from which it arose again became radioactive. The death of the extract seemed to
revive its source.

Crookes reported these facts to Rutherford, who by then had become professor of physics at McGill
University in Montreal. Rutherford had a particular interest in disappearing activities since his
discovery, in collaboration with a visiting professor of electrical engineering, R.B. Owens (1870-
1940), of a short-lived radioactive "emanation" emitted from thorium. When Crookes' news arrived,
Rutherford had just begun a collaboration with another visitor, a fresh graduate in chemistry from
Oxford, Frederick Soddy (1877-1956), with whose help he hoped to elucidate the nature of the
emanation. Rutherford and Soddy tried Crookes' experiment on thorium. They obtained an inert
thorium and a beta-emitting thorium X. In four days ThX had lost half its strength and the inert
thorium had regained half its initial power. By the end of 1902 they could announce that an atom of
thorium or uranium becomes an atom of ThX or UX as it emits an alpha ray, and an atom of ThX or
UX becomes one of a radioactive gas (the emanation), while emitting a beta ray. Similarly the
emanations, by then identified as chemically inert gases of the recently discovered argon family,
throw off active deposits that continue the process of radioactive decay.

Since the chemistry of the emanations differed altogether from that of their parents, the principle of
the periodic table of the elements required that an atom of ThX (say) differed in weight from one of
Th. Since some elements emitted only alpha rays, it followed that the rays must be particulate.
Rutherford contrived to demonstrate their nature with a good piece of radium some 19,000 times as
strong as uranium given him by Marie Curie. He placed the radium at the bottom of a set of parallel
narrow metal cylinders with their axes vertical. With no field, the radium rays produced a saturation
current i in a detector just above the cylinders; with one or the other field on, i decreased because
some of the rays were bent into the cylinders' walls.

The approximate value of (e/m)_α that Rutherford derived from his bending experiment suggested that e_α = 2e, m_α = 4m_H. He chose these numbers rather than, say, e_α = e, m_α = 2m_H because m_He = 4m_H and

helium turned up in ores of uranium and thorium. In 1908 Rutherford and his assistant Thomas
Royds (1884-1955) confirmed his conjecture that an alpha particle is a helium atom minus two
electrons. They confined a sample of radium emanation in a thin-walled vessel within a larger glass discharge tube from which every trace of helium had been removed. In two days signs of the
spectrum of helium appeared in the outer tube, the combination of alpha particles shot through the
walls of the inner tube and stray electrons they found in the outer tube.

In keeping with then current ideas about atomic structure, Rutherford assumed that the alpha particle
was as large as an atom. Of this misconception he disabused himself when, in the belief that he fully
understood their nature, he began to use alpha particles as an instrument rather than as an object of
research. The outcome (in 1910) was that the alpha particle lost the electrons that supposedly made it
a structure of atomic dimensions and, together with all the atoms of all the elements, gained a
nucleus. By 1911 the alpha particle, which had begun as an alpha ray and subsequently materialized
as a heavy particle of atomic dimensions, had become a bare nucleus carrying two positive charges.
The evolution of ideas behind this result took place in Rutherford's laboratory in Manchester, where
he had taken over from Schuster in 1907.

Rutherford and his assistant Hans Geiger (1882-1945) collected the output of alpha particles from
radium emanation and counted the number of alpha particles emitted in the same time using a
forerunner of the Geiger counter. The result made eα = 9.3 x 10^-10 esu. If eα = 2e, e = 4.65 x 10^-10
esu, considerably larger than the 3.1 x 10^-10 esu then accepted at the Cavendish. It came very close,
however, to
the value obtained by Max Planck (1858-1947) from his theory of black-body radiation, a coincidence
that to many physicists greatly strengthened the case for Rutherford's ideas about radioactivity and
Planck's approach to radiation.
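The arithmetic behind the charge determination is worth a quick check. A sketch in Python using only the figures quoted above; Planck's black-body value, commonly quoted as about 4.69 x 10^-10 esu, is supplied here as background and does not appear in the text:

```python
# Figures from the counting experiment of Rutherford and Geiger:
e_alpha = 9.3e-10        # charge carried per alpha particle, in esu

# On Rutherford's assumption that an alpha particle bears two units of charge:
e = e_alpha / 2
print(e)                  # 4.65e-10 esu

# Values it was compared against (the second is commonly quoted, not from the text):
cavendish_e = 3.1e-10     # then accepted at the Cavendish
planck_e = 4.69e-10       # from Planck's black-body theory

print(e > cavendish_e)                # True: "considerably larger"
print(abs(e - planck_e) / planck_e)   # under 1 percent: "very close"
```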

Cathode rays and beta rays

Physicists put cathode and beta rays to work as early as 1900. Two lines of inquiry can be
distinguished. The earlier used the rays to test the variation of mass with velocity predicted by the
various competing electron theories. Walther Kaufmann (1871-1947) became the main arbiter.
While an assistant at the University of Berlin and before Thomson introduced the corpuscle,
Kaufmann had begun to measure the e/m of the agent of cathode rays, which he suspected of
possessing electromagnetic as well as material mass, or, perhaps, electromagnetic mass alone.

Since electromagnetic mass µ increases with velocity v, the identification of beta particles as very fast
electrons opened the possibility of testing the predictions of µ(v) of competing electron theories.
Kaufmann developed an excellent apparatus (it had an unusually good vacuum) for measuring µ(v)
and also a predilection for the theory of Max Abraham (1875-1922), his young colleague at the
University of Göttingen, to which he had moved in 1899. Experiments begun in 1901 with rays from
a sample of radium furnished by the Curies confirmed, in 1902, that µ was entirely electromagnetic
and behaved as Abraham's theory of a rigid electron predicted. Abraham had competitors, however,
notably Lorentz, whose electrons were deformable.

In 1906 Kaufmann announced that his data ruled out Lorentz's theory and with it Albert Einstein's
(1879-1955) then new theory of relativity, which obtained the same µ(v) by a different route. By
then, however, the theorists felt strong enough to challenge the experiments. Both Planck and
Einstein decided that Kaufmann's data could be interpreted to support the Einstein-Lorentz expression
for µ(v). As the theory of relativity showed, a dependence of mass on velocity did not imply any
theory of the construction of electrons or any opinion about the electromagnetic origin of mass. As
measuring tools in electrodynamics cathode and beta rays demonstrated the velocity dependence of
"mass" but could not discriminate among various electron theories until theorists decided which
version they preferred.
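The competing predictions for the velocity dependence of mass can be put side by side numerically. The sketch below uses the standard textbook forms of the transverse electromagnetic mass for the two models (neither formula is given in the text):

```python
import math

def mu_lorentz(beta):
    """Lorentz-Einstein (deformable electron): m/m0 = 1/sqrt(1 - beta^2)."""
    return 1.0 / math.sqrt(1.0 - beta ** 2)

def mu_abraham(beta):
    """Abraham (rigid electron): standard form of the transverse mass ratio."""
    return (3.0 / (4.0 * beta ** 2)) * (
        (1.0 + beta ** 2) / (2.0 * beta)
        * math.log((1.0 + beta) / (1.0 - beta)) - 1.0
    )

for beta in (0.3, 0.5, 0.7, 0.9):   # v/c values typical of beta rays
    print(f"beta = {beta}: Lorentz {mu_lorentz(beta):.4f}, "
          f"Abraham {mu_abraham(beta):.4f}")
```

At beta = 0.5 the two forms differ by only about 3 percent; the gap opens up toward beta = 0.9, which is why the quality of Kaufmann's fastest rays mattered so much.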

A similar outcome resulted from the use of the rays to investigate atomic structure. Becquerel,
Rutherford, and other investigators had reached the incorrect but useful conclusions that the beta rays
from a homogeneous radioelement all have the same velocity of projection, and are absorbed
exponentially; and they had made rough measurements of the absorption coefficients α in metal foils.
Thomson calculated α(n) on the assumption that absorption resulted from the spreading of a beam of
beta particles owing to collisions between them and the n corpuscles per atom in the target foil.
Comparing α(n) with values of α(A) measured for beta rays by Becquerel and for cathode rays by a
student of Lenard's, August Becker, Thomson found n ≈ 2A, A the atomic weight of the target atoms.
The result agreed well with the analysis, also made by Thomson, of the scattering of x rays.

Further study primarily at the Cavendish of the absorption of beta rays persuaded Thomson that the
process was not exponential. He devised a new theory in which an incident collimated beam of betas
spread in consequence of multiple encounters of the beta particles with target corpuscles and also
with the positive sphere that, in his theory of atomic structure, bound the corpuscles together.
Experiments done at the Cavendish by J.A. Crowther (1883-1950) gave the "improved" result n = 3A.
As in the case of electrodynamics, cathode and beta rays made their contribution to the theory of
atomic structure in a semi-quantitative way.

Canal rays

The nature of Goldstein's brilliant canal rays remained in the dark until 1898, when Wilhelm Wien
(1864-1928) took them up. He had then just transferred to the Technische Hochschule Aachen from
the Physikalische-Technische Reichsanstalt in Berlin, where he had done his important work on
blackbody radiation. Succeeding in deflecting canal rays in an apparatus left behind by his
predecessor Philipp Lenard, Wien worked out that they traveled at one hundredth of the speed of
light, carried a positive charge, and had an (e/m)c in the range found for electrolytic ions. In a closer
determination, completed in 1902, he established its maximum value of (e/m)c at 10^4 emu, the value
for a hydrogen ion.
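Wien's benchmark of 10^4 emu can be recovered from the electrochemistry of the period, since for the hydrogen ion e/m is the Faraday constant divided by the atomic weight. A sketch with modern constant values; the conversion 1 emu of charge = 10 coulombs is assumed:

```python
faraday = 96485.0        # Faraday constant, coulombs per mole
emu_per_coulomb = 0.1    # 1 emu (abcoulomb) = 10 C, so 1 C = 0.1 emu
m_hydrogen = 1.008       # atomic weight of hydrogen, grams per mole

# Charge-to-mass ratio of the hydrogen ion in emu per gram:
e_over_m = faraday * emu_per_coulomb / m_hydrogen
print(round(e_over_m))   # 9572 -- about 10^4 emu, as in Wien's measurement
```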

Two curiosities stood out in Wien's data. The fluorescent screen on which the deviated rays fell
glowed over a wide continuous region, rather than in discrete patches corresponding to the various
values of (e/m)c represented in the discharge tube, and the far edge of the region corresponded to
(e/m)H whether or not the tube contained hydrogen. Wien accepted the explanation suggested by
Johannes Stark (1874-1957), then an assistant at the University of Göttingen, that a "ray" might alter
its charge many times while passing through the fields by capturing or losing electrons in collisions
with the residual gas. As for the constant presence of hydrogen, Wien attributed it to persistent
impurities in his gas samples.

In 1905 J.J. Thomson took up the study of the canal rays, or, as he named them, positive rays, because
they seemed the best way of resolving the nature of positive electricity, which he then regarded as the
principal problem of physics. He improved Wien's apparatus, in which the rays after passing through
the cathode enter a region of superposed electric and magnetic fields so oriented as to cause
deflections perpendicular to the rays and to one another. In this arrangement, particles with the same
(e/m)c but different values of v will hit a screen placed outside the coterminous fields and
perpendicular to the axis of the tube along a parabolic arc.
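The parabolic arcs follow from the two deflection laws: over the field region the electric deflection goes as (e/m)/v^2 and the magnetic as (e/m)/v, so eliminating the velocity leaves one parabola per value of (e/m). A numerical sketch with hypothetical apparatus constants:

```python
# Hypothetical apparatus constants absorbing field strengths and geometry:
k_E, k_B = 1.0, 1.0
e_over_m = 2.0            # arbitrary units

points = []
for v in (1.0, 1.5, 2.0, 3.0, 5.0):      # a spread of ion velocities
    y = k_E * e_over_m / v ** 2          # electric deflection
    z = k_B * e_over_m / v               # magnetic deflection
    points.append((y, z))

# Eliminating v gives z^2 = (k_B^2 / k_E) * (e/m) * y: all velocities fall
# on one parabola, whose opening measures e/m.
ratios = [z ** 2 / y for y, z in points]
print(ratios)   # each entry is (k_B^2 / k_E) * e_over_m = 2.0, up to rounding
```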

Thomson's first trials, made like Wien's with a fluorescent screen, gave clear, separated parabolas
corresponding to H+ and H2+, and no patches indicative of any other element irrespective of the gas in
the discharge tube. "The most obvious interpretation of these effects [Thomson wrote in 1907] seems
to me to be that under very intense electric fields different substances give out particles charged with
positive electricity and that these particles are independent of the nature of the gas from which they
originate" Where Wien saw impurity Thomson perceived a building block of matter

Wien was right, however, as Thomson himself showed in a beautiful series of experiments done with
the assistance of Francis Aston (1877-1945): positive rays are a mixture not only of ions of different
kinds, but also of atoms and molecules of the same kind in various charge states. To reduce the
variety, Thomson employed a very high vacuum realized with the help of charcoal cooled by liquid
air. By 1910 Thomson and Aston were obtaining the expected parabolic traces. A typical record
required an exposure of one and a half or two hours. These traces provided unequivocal evidence that
atomic masses have discrete values.

In 1913 Thomson discovered parabolas corresponding to A = 20 and A = 22 when neon occupied the
discharge tube. The atomic weight of neon is 20.2. It appeared therefore that the denser parabola, at
A = 20, belonged to neon. Thomson inclined to think that the lighter parabola pointed to an unknown
element, for which, however, no place existed in the periodic table. Diffusion through pipe clay
gave a partial separation of the substance with A = 22; but its spectrum turned out to be identical with
neon's. As Thomson unenthusiastically suggested, he was dealing with two substances different in
atomic weight but indistinguishable chemically and physically. He had found the first example of
isotopes other than radioactive bodies.
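The abundance ratio implied by the two parabolas follows from simple arithmetic on the atomic weight, assuming (as the text implies but does not compute) that only the A = 20 and A = 22 species are present:

```python
A_light, A_heavy = 20.0, 22.0   # the two parabolas Thomson observed
A_chemical = 20.2               # accepted atomic weight of neon

# Fraction f of the lighter species: f*20 + (1 - f)*22 = 20.2
f = (A_heavy - A_chemical) / (A_heavy - A_light)
print(round(f, 3))              # 0.9 -- nine parts neon-20 to one of neon-22,
                                # matching the denser parabola at A = 20
```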

X and gamma rays

The facts that X rays could not be deflected by electric or magnetic forces, and also could not be
reflected or refracted, made their status uncertain. The first set of properties marked them as waves in
the ether; the second as no waves at all. The majority view in 1900 considered them to be aperiodic
bursts of electromagnetic energy created in the rapid deceleration of cathode rays in the discharge
tube. This view was altered by the work of Charles Glover Barkla (1877-1944), who had made the
measurements of x-ray scattering at the Cavendish that Thomson used to estimate the size of n. In
1906 Barkla began to study the secondary x rays reflected from metals struck by "primary" x rays
straight from the discharge tube. By examining the ionization caused by the secondary radiation after
passing through a standard thickness of aluminum, Barkla could order the metals by their radiation's
penetrating power. The ordering followed that of atomic weights, the heavier giving rise to the more
penetrating. Moreover, contrary to the primary pulse, the secondary radiation was homogeneous.
Further investigation with the help of a student, C.A. Sadler, showed that the primary had to have a
component at least as hard as the characteristic secondary to stimulate it and that these "fluorescent x
rays" themselves were not homogeneous. In 1909 Barkla and Sadler repeated Rutherford's feat with
Becquerel rays and also his nomenclature: the characteristic or fluorescent secondaries came in a
softer (A) and harder (B) form. In 1911 Barkla changed the labels to the now familiar L and K to
leave alphabetical space for radiations harder than K and softer than L.

The way to waves was not cleared by these important detections since in other respects x rays
behaved as if constituted of material particles of the kind that Einstein had proposed in 1905 in his
"heuristic" theory about light. He had called attention to the peculiarity of ultra-violet light, which
had an unequivocal claim to the status of electromagnetic wave but seemed to be able to surrender all
its energy to a single electron in the photo-effect. He had made further use of this hypothesis in 1907,
in elucidating the energy fluctuations in a space occupied by blackbody radiation. X rays and a
fortiori the more penetrating gamma rays appeared to some observers to express the particulate
character more strikingly than Einstein's photons. For example, a corpuscle knocked out of an atom
by an x ray could have a kinetic energy almost equal to that of the cathode-ray particle that had
produced the x ray. This phenomenon impressed itself particularly on William Henry Bragg (1862-
1942), professor of physics at the University of Leeds, who, with the British knack for mechanical
analogies, likened it to a rock's falling into a lake and creating a spreading wave; which, when a
portion of its front strikes a rock identical to the first one, brings together all its energy and throws the
second rock to the height from which the first descended. The absurdity spoke for itself. Bragg
concluded that x (and gamma) rays behaved like particles.

But not always. Desultory efforts to diffract x rays at the apex of a wedge had given no conclusive
results when, in 1912, at the suggestion of Planck's former student Max von Laue (1879-1960), two
experimentalists in Röntgen's institute at the University of Munich, Walter Friedrich (1883-1960) and
P.P. Knipping (1883-1935), substituted a crystal for the wedge. With their cultivated technique, a
diffraction pattern ("Laue spots") registered on a photographic plate behind the crystal. The occasion
for the experiment, which could have been done any time during the preceding decade, was
apparently a confluence of Laue's expertise in diffraction theory, his access to Röntgen's students and
apparatus, a strong interest in crystal lattices among the mineralogists at Munich, and a then recent
recalculation of the results of the old wedge experiments by Munich's professor of theoretical physics,
Arnold Sommerfeld (1868-1951).

This happy outcome did not convince Bragg, who interpreted the Laue spots as the records of
particulate x rays that rolled along paths in the crystal defined by its lattice. He changed his mind
under the influence of his son, William Lawrence Bragg (1890-1971), then a student at the
Cavendish, who thought to explain the spots as consequences of the interference of waves reflected
from the crystal planes. The idea had come to him during lectures by Thomson on the pulse theory of
x rays. C.T.R. Wilson, still at Cambridge, then suggested that the atom-rich cleavage planes of
crystals might produce interference effects by specular reflection. Lawrence left Cambridge to work
on the idea with his father at Leeds. They quickly showed that Barkla's homogeneous fluorescent x
rays had a characteristic wavelength λ in accordance with the time-honored rule nλ = 2d sinθ, where θ
is the angle of specular reflection, d the distance between reflectors (in this case crystal planes parallel
to the surface), and nλ the path difference creating the interference.
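In modern notation the Bragg condition is usually written nλ = 2d sinθ, with θ the glancing angle between beam and crystal planes. A quick numerical check; the plane spacing is a commonly quoted rock-salt value and the angle is arbitrary, neither taken from the text:

```python
import math

def bragg_wavelength(d_angstrom, theta_deg, n=1):
    """Wavelength reflected at glancing angle theta from planes spaced d apart."""
    return 2.0 * d_angstrom * math.sin(math.radians(theta_deg)) / n

# Rock-salt (100) plane spacing, about 2.81 angstrom (commonly quoted):
lam = bragg_wavelength(2.81, 10.0)
print(f"{lam:.3f} angstrom")   # 0.976 angstrom -- squarely in the x-ray range
```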

Laue and the Braggs appeared to settle the question of the nature of x rays. The Nobel prizes of 1914
and 1915 certified their work. So did the success of those who applied the newly understood x rays
with their now measurable wavelengths to the study of atomic structure. Physicists still had much to
learn about these puzzling products of the cathode-ray tube.

Sources

Abraham, Henri, and Paul Langevin, eds. Ions, électrons, corpuscles. 2 vols. Paris: Gauthier-Villars, 1905.

Congrès International de Physique réuni à Paris en 1900. Rapports. 4 vols. Paris: Gauthier-Villars, 1900-01.

Crookes, William. "On the illumination of lines of molecular pressure, and the trajectory of molecules." Royal
Society of London, Philosophical transactions, 170 (1879), 135-64.

Goldstein, Eugen. "Vorläufige Mitteilungen über elektrische Entladungen in verdünnten Gasen." Akademie
der Wissenschaften, Berlin. Monatsbericht, 1876, 279-95.

Goldstein, Eugen. "Über die Entladung der Elektrizität in verdünnten Gasen." Akademie der Wissenschaften,
Berlin. Monatsbericht, 1880, 82-124.

Hertz, Heinrich. Schriften vermischten Inhalts. Leipzig: Barth, 1895.

Lenard, Philipp. Wissenschaftliche Abhandlungen. 3 vols. Leipzig: Hirzel, 1944.

Lorentz, H.A. Collected papers. 9 vols. The Hague: Nijhoff, 1935-39.

Perrin, Jean. Oeuvres scientifiques. Paris: CNRS, 1950.

Romer, Alfred, ed. The discovery of radioactivity and transmutation. New York: Dover, 1964.

Rutherford, Ernest. Collected papers. 3 vols. London: George Allen and Unwin, 1962-65.

Schuster, Arthur. The progress of physics 1875-1908. Cambridge: Cambridge Univ. P., 1911.

Thomson, J.J. "Cathode rays." Philosophical magazine, 44 (1897), 293-316.

Thomson, J.J. Rays of positive electricity. London: Longmans Green, 1921.

Zeeman, Pieter. Verhandelingen over magneto-optische verschijnselen. Leyden: E. Ijdo, 1921.

Studies

Buchwald, Jed, and Andrew Warwick, eds. Histories of the electron. The birth of microphysics. Cambridge:
MIT Press, 2001.

Bunge, Mario, and W.R. Shea, eds. Rutherford and physics at the turn of the century. New York: Science
History, 1979.

Carazza, Bruno, and Helge Kragh. "Augusto Righi's magnetic rays." Historical studies in the physical and
biological sciences, 21:1 (1990), 1-28.

Dahl, Per F. Flash of the cathode rays. Bristol: Institute of Physics, 1997.

Darrigol, Olivier. Electrodynamics from Ampère to Einstein. Oxford: Oxford Univ. P., 2000.

Dolby, R.G.A. "Debates over the theory of solution: A study of dissent in physical chemistry." Historical
studies in the physical and biological sciences, 7 (1976), 297-404.

Falconer, Isobel. "J.J. Thomson's work on positive rays." Historical studies in the physical and biological
sciences, 18:2 (1988), 265-310.

Feffer, S.J. "Arthur Schuster, J.J. Thomson, and the discovery of the electron." Historical studies in the
physical and biological sciences, 20 (1989), 39-61.

Fölsing, Albrecht. Heinrich Hertz. Hamburg: Hoffmann and Campe, 1997.

Fölsing, Albrecht. Wilhelm Conrad Röntgen. Munich: Hauser, 1995.

Heilbron, J.L. "The scattering of α and β particles and Rutherford's atom." Archive for history of exact
sciences, 4 (1968), 247-307.

Madey, T.E., and W.C. Brown, eds. History of vacuum science and technology. New York: American Institute
of Physics, 1984.

Nye, Mary Jo. "N-rays: An episode in the history and psychology of science." Historical studies in the physical
and biological sciences, 11:1 (1980), 125-56.

Quinn, Susan. Marie Curie. New York: Simon and Schuster, 1995.

Thomson, J.J. Recollections and reflections. New York: Macmillan, 1937.

Wheaton, Bruce R. The tiger and the shark. Empirical roots of wave-particle dualism. Cambridge: Cambridge
Univ. P., 1983.

Wien, Wilhelm. Aus dem Leben und Wirken eines Physikers. Leipzig: Barth, 1930.

Wilson, David. Rutherford: Simple genius. London: Priority Books, 1973.

III. Atomic Structure

The structure of atoms was neither a popular nor a productive subject when Joseph John
Thomson (1856-1940) conjectured in 1897 that they consisted entirely of corpuscles – the
negative subatomic particles that he and others had then just identified as the agent of
cathode rays. He would soon show that the corpuscle's apparent mass was but a
thousandth of that of the hydrogen atom mH. From this value and his basic assumption that
the corpuscles carried all the atomic mass AmH (A the atomic weight relative to hydrogen)
he made the number n of corpuscles in an atom about 1000A. Thomson's ideas guided
model makers through the quantization of the atom by Niels Bohr (1885-1962) in 1912-13.

1. An Almost Fresh Subject

By the early 1890s several lines of investigation had indicated the composite structure of the
chemical atom. One was the large number of chemical elements, which to many seemed too
many. Chemists continued to toy with the hypothesis introduced in 1815 by the English
chemist William Prout (1785-1850), who suggested that the elements might be compounded
from hydrogen or another "protyle." Although as values of atomic weights became more
precise Prout's hypothesis no longer fit the facts, the invention of the periodic table around
1870 revived the conviction that the elements must have something in common.

A second line of investigation favoring a complex atom derived from a central problem of
19th century physical science: the relationship between electricity and matter. In the most
sophisticated and orthodox theories of Maxwell's type, electricity acted as if it were an
incompressible fluid and electric "charge" merely marked the places where lines of force in
the fluid stopped or started on material bodies. In the most naive and convenient
Continental theories, electrical charge was a droplet of electrical fluid acquired in the
separation of positive and negative electricities. The question came to a head in the 1880s
over interpretation of experiments in electrolysis. As Hermann von Helmholtz (1821-94)
declared in an influential talk in 1881, electrolysis showed that if the atomic theory held for
either matter or for electricity, it held for the other too.

In his accounts of electrolysis, Michael Faraday (1791-1867) had introduced the terms "ion"
and "electrode" to indicate that something moved in an electrolytic fluid to and through the
electrified plates dipped in it. Models of the phenomena pictured the ions as charged atoms
or molecules torn from neutral ones by the action of the electrodes. In the 1880s Svante
Arrhenius (1858-1927), soon to be the leader of the new science of physical chemistry,
argued that some solute particles existed as ions in the fluid even in the absence of
electrodes or currents. Dissociation accounted for many disparate phenomena besides the
conductivity of solutions, e.g., change in freezing point with concentration of the solute and
the nature of osmotic pressure.

Physicists took over the concept of ion (without the dissociation hypothesis) to account for
the phenomena of gas discharges. Much of the research at the Cavendish Laboratory under
Thomson in the late 1880s and early 1890s concerned ions in fluids and gases. He
understood the relationship between electricity and matter usually in Maxwell's, but
sometimes in Continental, terms. There was no agreed picture of the relation between an
atom or molecule and its ion. A molecule might split into a positive and a negative part.
How could an atom do so?

The few pertinent models then available were very general and conjectural. H.A. Lorentz
(1860-1928) at Leyden and Thomson's colleague Joseph Larmor (1857-1942) at Cambridge
had devised atoms constructed of "ions" (Lorentz) and positive and negative "electrons"
(Larmor). Both schemes, which achieved definitive forms in 1895, could picture the
electrolytic or gaseous ion as an atom or molecule that had lost or gained an electron or an
ion; but since their basic particles had a charge-to-mass ratio e/m about that of the
electrolytic hydrogen atom, neither had a convenient representation of the relationship
between an atom and its ion before the discovery of the corpuscle.

A third set of investigations supposing some internal construction of atoms concerned
radiation – the emission of spectral lines and the anomalous dispersion of light. The
identification of the atom as the source of the rich line spectra of the elements instigated a
search for the spectral radiator associated with it. Theory (whether elastic-solid or
electromagnetic) required that the radiator have parts capable of different oscillations; not
necessarily identical parts, as in Prout's hypothesis, or pieces capable of independent
existence, as in Thomson's corpuscular theory, but perhaps, under some circumstances, both
identical and self-standing.

In an isolated but critically important case a theory of atomic structure provided an
explanation, and even a prediction, of an interaction between light, electricity, and matter.
The theory was Lorentz'; the interaction, a splitting of spectral lines in a magnetic field
demonstrated by Lorentz's former student Pieter Zeeman (1865-1943), produced the
unexpected results that the charge to mass ratio of the Lorentz ion was about 10^7 emu and
that only negative ions gave rise to spectra. No trace of the necessary positive counterpart
appeared.

Another difficulty with Lorentz' theory -- that it could not account for "anomalous" splitting
into more than three components -- was shelved in the general enthusiasm over the
simplification and unity that the corpuscle/electron brought into physics. It at once
explained the relationship between an atom and its ion, provided a spectral radiator and the
agent of dispersion, confirmed the asymmetry of charge (corpuscles were negative
irrespective of their source), and could easily assume the central role in Prout's hypothesis.
By 1900 many physicists inclined to accept, or had accepted, the identification of the
corpuscle with the cathode-ray particle and the Lorentz-Zeeman ion, and consequently
Thomson's general ideas about atomic structure and the relationship of atoms to ions.

To take a further step into the atom required a bold mind. British physicists pointed to two
major obstacles before anyone had tried to go beyond Lorentz. In his profound book Aether
and matter (1900), Larmor observed that nothing about the corpuscle/electron or
electrodynamics or mechanics fixed the size of atoms. How could the theorist determine the
only configuration that produced the observed spectrum, which showed by its sharpness
and consistency that the subatomic vibrations from which it arose took place under
absolutely rigid constraints? Moreover, said Lord Rayleigh (1842-1919), also in 1900, even
with rigid mechanical constraints atoms constructed on the principles of received physics
might not be able to account for the observed spectra. The formulas for the main spectral
series of the elements did not have the form of frequencies of mechanical systems.

The most immediate question, however, was the nature of the atom's positive charge. In
1899 Thomson offered the British Association for the Advancement of Science an obscure,
but ingenious, solution: "when [the corpuscles] are assembled in a neutral atom their
negative effect is balanced by something which causes the sphere through which the
corpuscles are spread to act as if it had a charge of weightless positive electricity equal in
amount to the sum of the negative charges on the corpuscles." Thomson ascribed no weight
to the positive component and so, like Larmor, required thousands of electrons to make up
the atomic weight. These large numbers had the merit, as Larmor pointed out, of mitigating
a severe difficulty in the production of spectral lines by oscillations of charged particles.
Since no system of charged particles interacting by Coulomb's law can be stable if the
particles are at rest among themselves, the atomic electrons or corpuscles had to orbit, and
hence accelerate and radiate; the consequent loss of energy would oblige them eventually to
fall into the center of the centripetal force that retained them.

Larmor pointed out that if the accelerations of all the electrons summed to zero, and if the
radii of their orbits were small compared with the wavelength of the waves they emitted,
then their total radiation would be very much smaller than that lost by single electrons in
the same orbits. Thomson later showed that if many corpuscles circulated evenly spaced in
concentric rings, the loss could be negligible. In Larmor's model, the motions responsible
for the line spectra were oscillations of electrons around their equilibrium orbits. Since each
electron has three degrees of freedom, n of them could produce 3n spectral lines. That was
another advantage of a full atom, for the known lines of the heavier elements ran into the
thousands.

2. Tentative Structures

Physicists were bold and clever enough to devise three different pictures of the insides of
atoms during the first years of the 20th century. James Jeans (1877-1946), continuing in
Larmor's line, populated the atom with equal numbers of positive and negative electrons
interacting by non-Coulomb forces; Jean Perrin (1870-1942), taking his inspiration from the
solar system, supposed electrons in motion in rings around a central positive sun (or, better,
a central Saturn); Lord Kelvin (1824-1907), materializing Thomson's notion of a space that
acted as a neutralizing retainer, placed electrons within a diffuse sphere of positive
electricity.

Because no evidence for a positive electron came to light, the Larmor-Jeans approach was
not developed. The Saturnian atom fared only slightly better. In 1904 Hantaro Nagaoka
(1865-1950), a professor of physics at the University of Tokyo, published computations of
the frequencies of oscillations of the electrons in the Saturnian rings around their
equilibrium orbits. He concluded that the vibrations could be the source of spectral lines
and suggested that radioactivity occurred when the oscillations grew large enough to pull
the atom apart. G.A. Schott, a former student of J.J. Thomson's who became professor at the
University of Wales, countered that all the modes of oscillation in the plane(s) of the motion
are unstable unless the charge on Saturn exceeds by far the sum of the charges of the
electrons in the rings: either the system fell apart or made matter excessively positive.
Either way it made a poor model for the atom.

The only model that underwent progressive development before 1910 was Kelvin's.
Thomson took it up in 1904. Like Nagaoka and at the same time, he calculated the
frequencies of the perturbed oscillations of the electrons in a single-ring atom as a function
of their number p. He hoped to learn something about atomic structure from the value of p
at which mechanical instability set in. It turned out to be six. Thomson learned that more
electrons could be added to a single ring if one or more electrons were placed inside it. For
the special case p = 20, stability could be achieved if the number of internal electrons q had
any value between 39 and 47. These nine configurations offered an analogy to the
(unfortunately only eight) elements in the second and third periods of Mendeleev's table.
For if q were close to 39, the atom might tend to shed an outer electron, and so act
electropositively; but if q were near the maximum, the atom might tend to gain an electron
and behave electronegatively.

Thomson's analogy introduced the fundamental idea that atoms of successive elements in
the periodic table differ from one another by the addition of a single electron. As presented,
however, the idea could make little sense to the chemist. As Augusto Righi (1850-1920)
observed, it required elements to change their species without a discernible change in
weight; it gave no purchase for a qualitative distinction between ionization and
transmutation, both of which can result in the model from the gain or loss of a superficial
electron. Again, Thomson's analogy works with the reverse of the modern assignment of
roles to core and valence electrons. The atoms of each period have the same number of
external electrons and differ only in the population of their inner rings. Chemical and
optical properties consequently derive primarily from the deeper-lying electrons. Likewise
all the electrons in an atom and not just the deepest suffer the slow radiation loss that,
according to the model, eventuates in radioactivity.

Nonetheless, the model allowed Thomson to count atomic electrons by computing their
contributions to the scattering of various kinds of radiation. The outermost electrons, he
supposed, would disperse light of optical frequencies; old measurements of dispersion in
hydrogen, when compared with his formulas, made n ≈ A. X and beta rays should agitate
deep-lying electrons. From experiments by his former student C.G. Barkla (1877-1944) and
by Rutherford, Thomson made the number of x-ray scatterers about 0.2A. By 1906 he had
deduced the capital result that the number of electrons in an atom is of the order of its
atomic weight.

In 1907 Thomson summarized his investigation of electronic atoms in a book, The corpuscular
theory of matter, which aroused great interest even on the Continent. A German translation
was published; Lorentz regarded the plenary model favorably; Arnold Sommerfeld's (1868-
1951) physics colloquium at Munich studied it; and Max Born (1882-1970) extolled the
elucidation of the periodic table in his Habilitation lecture at Göttingen in 1909. The editor
of the Journal de chimie physique asked for an account of Thomson's ideas about chemical
combination adapted to the capacity of his readers.

The unexpected fall in n confirmed the asymmetry between negative and positive electricity
and rescued positive charge from its ghostly status. It had to carry most of the atomic
weight. To discover what it might be, Thomson embarked on the study of canal rays, with
the results reported under "Ray Physics." To learn more about atomic structure, he devised
a new theory of beta-ray scattering based on the assumption that the observed deviation of
a collimated beam arose from many successive small deflections of individual beta particles.
Experiments done at the Cavendish agreed with the theory if n ≈ 3A.

Rutherford had reason to doubt both the experiments and the theory that anchored the
result n ≈ 3A. Neither older experiments on beta rays nor data then (1910) just obtained on
alpha rays, both at his institute at the University of Manchester, fit Thomson's model or
analysis. The most striking piece of data, brought to light by Hans Geiger (1882-1945) and
Ernest Marsden (1889-1970) in 1909, was that about one in 8000 alpha particles striking a
thin gold foil reemerged on the side of incidence. To analyze this result in Thomson's
manner, Rutherford had to suppose that an alpha particle, to which Thomson assigned
some ten electrons, behaved as a point charge. But that was to suppose that the helium
atom minus two electrons occupies a space very much smaller than an atom. The nuclear
atom was not so much the outcome of Rutherford's analysis of the scattering of alpha
particles as its main ingredient.

Rutherford's theory differed from Thomson's not only in introducing the nucleus as the
major scattering center but also in ascribing deviations acquired in traversing thin metal
foils to a single encounter with a single nucleus: even a particle whose path is shifted by a
considerable angle almost certainly received its total deflection in one stroke. As Rutherford
showed by a probabilistic analysis fashioned after Thomson's latest scattering theory,
multiple encounters with diffuse spheres could not deflect alpha particles in the number
observed. Nor could electrons acting individually, even if fixed rigidly in the spheres,
unless a gold atom had 40,000 of them. That was of course far too large a population after
Thomson's work of 1906. Rutherford showed that an acceptable value of nAu could be
obtained if the charge of the scattering center was ne rather than e, if the single kick came
from a nucleus instead of from an electron.

Rutherford marshaled his arguments in favor of the nuclear model and the relation A/2 < n
< A in a brilliant paper published in 1911. It did not make a sensation. Rutherford did not
say how his model evaded the fatal instability that had annihilated Nagaoka's. Nor did he
bring it into touch with the preoccupations of atom builders: periodic properties of the
elements, spectral regularities, radioactivity. Nor, in his paper of 1911, did Rutherford
declare explicitly the most obvious and important extrapolation from his results: since
helium has two electrons and since n < A, hydrogen has but one and, in general, the charge
on a nucleus equals the number of its element in the periodic table.

3. Radioactivity and the Elements

Making room at the table

In 1895, on the eve of Becquerel's discovery of radioactivity, chemists recognized over sixty
elements exclusive of the rare earths, and among the earths anywhere from 6 to 9. In order
to place the elements in the table by weight horizontally and by chemical properties
vertically, they customarily left places between molybdenum and rhodium, and between
tungsten and osmium; five spaces between bismuth and thorium, and one between thorium
and uranium; and an uncertain number among the rare earths. Of the 90 elements now
known to exist between hydrogen and uranium, 67 were then recognized by everyone, three
more in the rare earths by many, and eight in absentia by specialists in the periodic table.

Between 1895 and 1900 two new sorts of substances came to light that claimed some of the
spaces and opened new ones. The latter sort, the rare gases, made their first appearance in
argon in 1894. William Ramsay (1852-1916) and Lord Rayleigh separately determined its
density at about 20 times that of hydrogen gas. Its atomic weight came out a little under 40.
That put it just after potassium (A = 39.1), where it destroyed the periodicity of the periodic
table.
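The weight follows from the density by Avogadro's rule, by which molecular weight scales with gas density. The arithmetic can be sketched as follows (the 19.8 density ratio and argon's monatomicity, established from its specific-heat ratio, are supplied here as assumptions):

```python
# Avogadro's rule: at the same temperature and pressure, molecular
# weight scales with gas density.  Hydrogen gas is diatomic (molecular
# weight about 2), so a gas roughly 19.8 times denser has molecular
# weight about 40; for a monatomic gas, as argon's specific-heat ratio
# showed it to be, that is also the atomic weight.
M_H2 = 2.016            # molecular weight of hydrogen gas
density_ratio = 19.8    # argon density / hydrogen density (assumed)

A_argon = density_ratio * M_H2   # comes out just under 40
```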

In 1895 Ramsay found helium, an element previously only known as the source of lines in
the solar spectrum, in terrestrial sources and calculated its weight at about 4, ensuring it a
place between hydrogen and lithium (A = 7). In 1898, having procured means to liquefy
atmospheric gases, he found "neon" (A = 20) and two heavier and rarer family members,
krypton (84) and xenon (131). All except argon fit into place perfectly between the halides
and the alkalis.

Soon discrepancies similar to potassium/argon involving iodine/tellurium and cobalt/nickel
were accepted as true anomalies and not errors of measurement. These anomalies,
uncertainty about the number of empty spaces, and dispute over fundamental principles of
classification had by 1905 all but destroyed the utility of the periodic table as a guide to
discovery.

The other sort of new element was radioactive. Over 20 disintegration products were
known by 1906, over 30 by 1910, over 40 by 1914. A conspicuous example of this increase
was the discovery between 1905 and 1908 by Otto Hahn (1879-1968) of three radioactive
products between thorium and thorium X, on whose presumed immediate genetic tie
Rutherford and Frederick Soddy (1877-1956) had based their revolutionary concept of
disintegration. Hahn found the first of these unwelcome intruders, radiothorium (RaTh), in
1905 while working in Ramsay's laboratory in London, and two "mesothoriums" between
Th and RaTh while working with Rutherford in Montreal. The head of the thorium series,
thus established in 1908, stood without alteration in 1911:
Th (α) -> MesoTh1 (rayless) -> MesoTh2 (β) -> RaTh (α) -> ThX (α) -> ThEm (α) -> ...
The mistaken classification of MesoTh1 as rayless (it gives off weak betas) obscured the
regularities soon to be expressed in the radioactive displacement laws.

In parallel with the enrichment of thorium went the discovery of links between uranium
and radium. An international hunt for the immediate parent of radium ended in 1907 when
Rutherford's great friend B.B. Boltwood (1870-1927), professor of chemistry at Yale
University, found "ionium," which had sheltered in its very close chemical similarity to
thorium. The head of the uranium series, thus established in 1907, stood unaltered in 1911:
U (α) -> UX (β) -> ? -> ionium (α) -> Ra (α) -> RaEm (α) -> active deposit

A most important clue to the incorporation of the radioelements into the table was the
decisive proof by Rutherford that the alpha particle carries two charges (which astonished
many) and that, when its missing electrons are restored, it is a helium atom (which surprised no
one). Having fixed Aα at 4, radiochemists could calculate the atomic weights of decay
products provided that, as experience increasingly confirmed, just one alpha or one beta
particle is expelled in each radioactive transaction. The calculator also needed reliable
decay schemes and knowledge of weights of a few elements in the series. Such information
was available in 1911 only for the uranium chain: for uranium and for radium, which Marie
Curie extracted in sufficient quantities to fix its weight at 226 in 1907. Weights in the
thorium chain could not be determined. Nonetheless, no matter how many alpha particles
intervened between Th and ThEm, the emanations of thorium and radium could not have
the same weight. As for the third chain, beginning with actinium, no member had been
isolated in anything near to a weighable amount.
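The bookkeeping behind such calculations is simple arithmetic and can be sketched as follows, using the figures quoted above (Curie's A(Ra) = 226 and 4 units of weight per alpha); the variable names are illustrative only:

```python
# Weight bookkeeping in the uranium series as understood in 1911:
# each alpha emission removes 4 units of atomic weight (Rutherford's
# alpha = helium), each beta none.  Starting from Marie Curie's
# A(Ra) = 226, neighbors linked to radium by single alpha steps
# follow at once.
A_Ra = 226        # radium, fixed by Curie in 1907
A_alpha = 4       # weight carried off by one alpha particle

A_RaEm = A_Ra - A_alpha     # radium emanation, daughter by one alpha
A_ionium = A_Ra + A_alpha   # ionium, immediate alpha parent of radium

print(A_RaEm, A_ionium)     # 222 230
```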

To integrate further the products of radioactive decay into the periodic table, chemists
determined the character of their phantom radioelements by the chemical company they
kept. In this way Willy Marckwald of Berlin showed that polonium had to stand in the
unoccupied space next to bismuth and above tellurium. Similarly radium emanation had to
come above xenon.

Gradually chemists realized that radioelements similar to the same reagent might be
chemically indistinguishable from one another. In 1907 Soddy listed the pairs
MesoTh1/ThX, Th/RaTh, and Th/ionium as inseparable by known means. Bruno Keetman
of Vienna tried everything he could to split the sizable amount of ionium he had prepared
from the lusty grip of thorium, but to no avail. In 1909 The Svedberg and his colleague
Daniel Strömholm at Uppsala drew up a periodic table in which the emanations from
thorium, radium and actinium occupied a single space above xenon, while Ra/AcX/ThX
stood together above barium. Then the heroic labors of several chemists to split
MesoTh1/Ra/ThX, and to separate radium D, a constituent of radium active deposit, from
lead, failed. In 1911 Soddy blamed the failure on nature: the inseparable elements were
chemically identical.

Soddy stressed that the inseparables included members differing appreciably in atomic
weight. Computing from the known alpha emissions in the decay chains, he found two
units between the weights of successive members of the triads ThX/Ra/Th and
RaTh/ionium/Th. In 1911 he stated that the expulsion of an alpha particle pushed the
emitter two places to its left in the table and showed the utility of the rule by reinterpreting
the disintegration of uranium. It had appeared to give off two alpha particles. Soddy
observed that the rule of one-particle emission could be saved if "uranium" consisted of two
inseparable components UI and UII, and if the decay proceeded as
UI (α) -> UX (β) -> [? (β)] -> UII (α) -> ionium (α) -> radium (α),
bringing the head of the uranium series into full agreement with that of thorium.

During 1913 radiochemists cleared up many details given importance by Soddy's
displacement law and found the analogue for beta emission: alpha loss drives a decay series
two places to the left, beta loss one place to the right. With these rules and the oxymoronic
"chemically inseparable elements," for which Soddy coined the word "isotopes," the 40
radioelements known in 1913 fit into ten spaces from thallium to uranium inclusive. Half
the spaces contained isotopes of elements known in 1895: Tl, Pb, Bi, Th, U; the other half,
corresponding to five of the seven blanks between Bi and U usually provided in tables
drawn up after 1900, were filled or overfilled with forms of elements discovered from their
radioactivity alone.
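The displacement rules lend themselves to a short illustrative calculation. A minimal sketch, walking Soddy's rewritten head of the uranium series with modern identifications of Z and A supplied for illustration:

```python
# Soddy-Fajans displacement rules: an alpha pushes the emitter two
# places down the periodic table (Z -> Z - 2) and 4 units lighter;
# a beta pushes it one place up (Z -> Z + 1) at unchanged weight.
# The walk follows UI -a-> UX1 -b-> UX2 -b-> UII -a-> ionium -a-> radium.

def decay(state, particle):
    Z, A = state
    if particle == "alpha":
        return (Z - 2, A - 4)
    if particle == "beta":
        return (Z + 1, A)
    raise ValueError(particle)

state = (92, 238)                                  # uranium I
trail = [state]
for particle in ["alpha", "beta", "beta", "alpha", "alpha"]:
    state = decay(state, particle)
    trail.append(state)

# trail: UI(92,238), UX1(90,234), UX2(91,234), UII(92,234),
#        ionium(90,230), radium(88,226).
# UI and UII land in the same place (Z = 92): chemically identical isotopes.
```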

Soddy's insertion of the radioelements into the periodic table was a synthesis of his theories
with experiments by his assistant Alexander Fleck and with several displacement rules
proposed in 1912 and 1913 by former co-workers in Rutherford's laboratory in Manchester,
notably Georg von Hevesy (1885-1966) and Kasimir Fajans (1887-1975). Fajans' system
required the existence of a second beta emitter (UX2) in uranium, as well as beta emission
from both mesothoriums. He discovered the short-lived UX2 (brevium) himself. As Soddy
observed, brevium had the unique feature of being the only radioelement known by a single
isotope. He rightly anticipated that it soon would be joined by the parent of actinium. In
1918 Otto Hahn and Lise Meitner (1878-1968) announced the separation of proto-actinium
from pitchblende, which thus yielded up the last of its elements and plugged the hole
between thorium and uranium.

The great advance in understanding represented by the concepts of atomic disintegration,
radioactive displacement, and isotopy was gained without reliance on detailed atomic
models. The utility of Rutherford's atom for representing chemical identity with radioactive
diversity quickly made isotopy and the nuclear model as inseparable as radium and
thorium X.

4. Quantizing the Atom

The quantum theory became an ingredient in atomic models around 1910. Until then it had
usually been expressed, as it had been created, by modeling matter as a collection of
harmonic oscillators, as in Einstein's theory of the specific heats of solid bodies (1907). The
impulse his work gave to experiments on specific heats at low temperatures resulted in the
famous Solvay council of 1911. Among the topics considered there was the relationship of
quantum theory to atomic structure.

Thomson's model seemed made for this development since for one electron it acted as a
simple harmonic oscillator, the electron vibrating along a diameter rather than revolving
around the center. This line of thought was opened before the Solvay council met by Arthur
Erich Haas (1884-1941), then a student at the University of Vienna. Reports of the meeting
caused J.W. Nicholson (1881-1955), a mathematical physicist trained at Cambridge, to work
the quantum into a version of the nuclear atom that he had constructed to account for
certain stellar spectra. Nicholson's results in turn caused Niels Bohr to consider how his
quantized version of Rutherford's atom could be made to radiate.

The first quantized atoms

In 1910 Haas observed that the frequency of oscillation f of an electron in a
Thomson sphere is independent of the amplitude: f² = e²/4π²ma³, a the radius of the sphere.
In the spirit of the quantum theory, he set f = W/h, W being an energy, for which he
proposed hf = e²/a. He therefore had an electromagnetic expression for h, h = 2πe√(ma), or,
supposing f known and eliminating a to find a formula for e, e⁵ = (h³f/4π²)(e/m). Construing
his model as a hydrogen atom, he obtained e = 3.2×10⁻¹⁰ esu, in excellent agreement with the
incorrect value then still favored at the Cavendish. Lorentz defended Haas' work at the
Solvay council against Max Planck (1858-1947), who did not see the strength of the analogy
to his resonators.
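Haas's formula can be checked numerically. In the sketch below, f is taken, purely for illustration, to be the Rydberg frequency; Haas's own choice of optical frequency differed, so only the order of magnitude should be compared with his result:

```python
import math

# Haas's relation e^5 = (h^3 f / 4 pi^2)(e/m), i.e. e^4 = h^3 f/(4 pi^2 m),
# evaluated in CGS units.  The frequency f below is an illustrative
# choice (the Rydberg frequency), not Haas's own input, so only the
# order of magnitude is meaningful against his e = 3.2e-10 esu.
h = 6.626e-27    # erg s
m = 9.109e-28    # g, electron mass
f = 3.29e15      # 1/s, Rydberg frequency (assumed for illustration)

e = (h**3 * f / (4 * math.pi**2 * m)) ** 0.25   # esu
# e comes out at a few times 1e-10 esu -- the observed order of magnitude.
```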

Neither did Nicholson, who, however, took from the Solvay reports the inspiration to fit
quantum ideas to the extraordinarily successful single-ring Saturnian model he had been
investigating. He had assigned a ring with three electrons to hydrogen, and ones with two,
four, and five electrons to "proto-elements" responsible for the emission of unassigned lines
in solar and nebular spectra. The oscillations of electrons perpendicular to
their ring are stable for the populations Nicholson studied. Nine of eleven unassigned
nebular lines agreed with the perpendicular oscillations of "nebulium" (n = 4) and the
strongest coronal lines fit those of "proto-fluorine" (n = 5). Years later astrophysicists
showed that all Nicholson's lines belonged to multiply ionized forms of iron and other
common terrestrial elements.

Besides the number of electrons in the rings, Nicholson had a free parameter in the frequency
of the ring's unperturbed rotation. Following Haas' example, he computed E/f, where E is
the negative of the total energy of the model atom, and also the total angular momentum.
They turned out to be integral multiples of h and of h/2π, respectively. Hence Nicholson's
quantized atom gave off different lines according to how far its internal angular momentum
had run down by various discrete amounts from a standard value. And, as he told the
British Association in 1913, the standard value, prescribed by the quantum theory, would
fix the size of atoms.

Bohr's atom

Niels Bohr began the quantization of the nuclear atom in June 1912. He had gone to
England in 1911 to work with Thomson; but he did not establish the close rapport on which
he had counted and moved to Rutherford's laboratory in Manchester. There he tried to
refine the theory of the passage of alpha rays through matter. He soon found out for
himself that oscillations of Saturnian electrons in the plane of their motion are not stable.
How then to calculate the response of an atom to an alpha particle passing through it? Bohr's
answer was to grant special status to certain orbits in what he thought was the spirit of
quantum theory. Electrons circulating in an orbit with a kinetic energy T and frequency f
would not have to radiate or to respond to minor perturbations if T/f = K. T appears rather
than the total energy W (= -T) to avoid negative numbers; and the undefined constant K
rather than Planck's h to take into account that the nuclear atom is not a simple harmonic
oscillator.

Bohr's chief research line from June 1912 to February 1913 was the exploitation of his
stability condition to solve the problems addressed in Thomson's theory of atomic structure.
Although he failed to find an explanation of periodicity, he made good use of an erroneous
result of his calculations -- that an electron added to an atom with a saturated ring or rings
goes outside, not inside, the existing structure. With this proposition and Rutherford's
approximation n ≈ A/2, Bohr could go far beyond Thomson and state precisely the number
of electrons in any normal atom. Since the nuclear model requires nHe = 2, it followed that
nH = 1, nLi = 3, and so on, each neutral atom containing a number of orbiting electrons equal
to the number Z of its element in the periodic table, counting from hydrogen as one.

By the end of 1912 Bohr had met Nicholson's model, which, like his, was quantized and
nuclear. But Nicholson's atoms radiated and Bohr's did not; and nebulium and
protofluorine had the same electronic populations as the doctrine of atomic number
demanded of beryllium and boron. While trying to understand the connection between his
model and Nicholson's, Bohr probably introduced the idea of excited states satisfying
(T/f)n = Kn, where n numbered the state, which would have prepared him to respond to a
challenge from his colleague H.M. Hansen (1886-1956), Copenhagen's expert on
spectroscopy, to explain the Balmer formula.

In this formula, νn = R(1/4 − 1/n²), frequency appears as a difference that could be
connected with an energy difference by multiplication by h. Thus Rh/n² would represent
energy, possibly the energy of one of Nicholson's radiating states. Bohr had already worked
out that, if (T/f)n = Kn and Tn = Ze²/2a, then Tn = π²me⁴Z²/2Kn². Equating Tn with Rh/n² for
hydrogen (Z = 1), and writing K = αn to kill the n's (α a constant to be determined), Bohr
would have had (T/f)n = nh/2; from which he could compute Tn, reverse his argument, and
obtain R as a product of the atomic constants. He supplied two justifications of the odd
equation (T/f)n = nh/2 in his famous three-part paper on atomic structure published in 1913.

The first, in analogy to Planck's procedure, set Tn = nhνn, in which the physical significance
of νn is not obvious. Bohr wanted νn = fn/2, and argued that it was the "average" frequency
of the light emitted by an external electron captured from rest (f = 0) into the nth orbit with
mechanical frequency fn. This ad hoc derivation underscored the difficulty of transforming
Planck's formula from the simple harmonic oscillator, for which the radiated and
mechanical frequencies coincide, to the nuclear atom, where they do not; it resulted in the
great invention or discovery that the radiated frequencies have only a formal relation to the
mechanical ones; and it placed the quantum condition on the radiation, assumed to be
produced during capture of an electron by an ion, rather than on the mechanical quantities
defining the orbit.

Bohr's second formulation, the one now generally remembered, read the fundamental
equations, Tn = nhνn and νn = fn/2, as Tn = nhfn/2, that is, as a condition on the mechanical
quantities defining electron orbits in the special or "stationary" states n. Since T/f is just πp,
where p is the angular momentum, an alternative expression of the condition, in which all
trace of Planck's formulation, and hence of Bohr's path, has been lost, is pn = nh/2π. Thus
the familiar Bohr hydrogen atom, where radiation occurs when the electron moves from one
stationary state to another, no preceding ionization being required; when the transition
occurs from state n to state 2, a line of the Balmer series results, νn = (T2 − Tn)/h = (2f2 − nfn)/2
= R(1/4 − 1/n²). The value of Tn in fundamental constants can be obtained from Tn = e²/2an,
the force balance e²/an² = 4π²mfn²an, and pn = 2πmfnan² = nh/2π: Tn = 2π²me⁴/n²h². Identifying
Rh/n² with Tn, R = 2π²me⁴/h³.
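Bohr's closed expression for R invites a numerical check. A minimal sketch with modern CGS values (not the constants available in 1913):

```python
import math

# Bohr's result R = 2 pi^2 m e^4 / h^3 (R in frequency units), evaluated
# with modern CGS constants rather than those Bohr had in 1913.
m = 9.109e-28    # g, electron mass
e = 4.803e-10    # esu, electron charge
h = 6.626e-27    # erg s

R = 2 * math.pi**2 * m * e**4 / h**3
# R comes out close to 3.29e15 1/s, the spectroscopic Rydberg frequency.
```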

At the end of 1913, in a lecture to the Danish Physical Society, Bohr presented a third
derivation that made a virtue of the disconnection between ν and f. In the transition from
the (n + 1)st to the nth state, an electron would emit a line of frequency νn+1,n = R(1/n² −
1/(n+1)²) ≈ 2R/n³ when n >> 1, that is, when the atomic electron is so weakly held that it
might be described as an ordinary electron. Bohr made it a principle – the "correspondence
principle" – that in this limit the frequency ν, computed by quantum jumps between
neighboring orbits, should be numerically equal to the mechanical frequency f of either
orbit. Now fn = 2Tn/nh = 4π²me⁴/n³h³ = νn+1,n = 2R/n³, whence, as before, R = 2π²me⁴/h³.

Spectroscopists found in Bohr's derivations a challenge to themselves and a crucial
experiment for him. They had ascribed to hydrogen various lines that fit the Balmer-like
formula ν = R(1/(1.5)² − 1/n²), n = 2, 3... Bohr had no place for half-integer orbits. He boldly
rewrote the formula as ν = 4R(1/3² − 1/(2n)²) and attributed it to ionized helium: if the
nucleus has a charge of Ze, the term e⁴ in the expression for R must be replaced by Z²e⁴.
Alfred Fowler, an English spectroscopist enlisted by Rutherford, confirmed that the
problematic lines showed up in tubes containing helium and free of hydrogen; and he also
observed that the series formula never fit the lines Bohr attributed to helium as well as the
Balmer formula fit hydrogen's.

Bohr answered in October 1913 in a masterstroke that made theorists take him seriously. He
had neglected, he said, the small motion of the heavy nucleus in estimating the electron's
energy in the stationary states. Repairing this omission in the way taught in elementary
mechanics, which amounts to replacing m by m' = m/(1 + m/mZ), mZ the mass of the nucleus,
Bohr made the true hydrogen constant RH = (m'H/m)R. Theory required RHe+/RH = 4.00163.
Fowler looked, and reported RHe+/RH = 4.0016. The impression made by this extraordinary
confirmation may be gauged from Hevesy's description of Einstein's reaction to the news.
"When he heard this, he was extremely astonished and told me: 'Then the frequency of the
light does not depand at all on the frequency of the electron... And this is an enormous
achiewement. The theory of Bohr must be wright.'"
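The ratio can be verified in a few lines. The sketch below uses modern mass ratios; Bohr's 1913 inputs differed slightly but gave the same figure to the quoted precision:

```python
# Bohr's nuclear-motion correction replaces m by m' = m/(1 + m/M), so
# R_He+ / R_H = 4 * (1 + m/M_H) / (1 + m/M_He).  The mass ratios below
# are modern values, supplied for illustration.
m_over_M_H = 1 / 1836.15     # electron mass / hydrogen nucleus mass
m_over_M_He = 1 / 7294.3     # electron mass / helium nucleus mass

ratio = 4 * (1 + m_over_M_H) / (1 + m_over_M_He)
# ratio comes out at 4.0016..., the figure Fowler confirmed.
```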

In his second and third papers on atomic constitution of 1913 Bohr used the stable
perpendicular modes to determine the populations of the rings of heavier atoms on the
assumption that in their ground states all electrons have one quantum of angular
momentum. But in assigning configurations to each element he was guided less by
calculation than by intuition; and the exercise must be regarded more as a plausibility
argument in Thomson's style than as a demonstration.

Confirmation

However problematic the ring populations, the Saturnian structure provided an atom with
sufficiently distinct parts to give a persuasive geometry of atomic activity. The outer
electron structure determined chemistry and visible radiation; the nucleus locked away
radioactivity; and in between, in the inner electron structure, X rays had their seat.

While Bohr labored on his masterpiece, Rutherford's prize student H.G.J. Moseley (1887-
1915) rushed to obtain the frequencies of the K and L lines using the new techniques of x-ray
interference. By November 1913 he could publish that the strongest of the K lines, Kα, in
eleven metals from calcium through zinc, satisfied νKα = (3/4)(Z − 1)²R. The eleven included

the pair Co/Ni, whose lines followed Z, not A. In the deep recesses of the atom, where K
rays originate, there is no trace of the chemical or periodic properties of the elements.
Moseley took the doctrine of atomic number as demonstrated. He therefore reversed his
march, and used his surprisingly powerful formula and a similar one he found for Lα,
νLα = (5/36)(Z − 7.4)²R, to hunt for missing elements. His search confirmed the appropriateness of
the customary blanks for homologues of manganese (Z = 43, 75) and tidied up the chemical
jousting ground of the rare earths.
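Moseley's law is easily evaluated. The sketch below, using the modern Rydberg frequency, shows that the Kα frequency orders Co and Ni by Z despite their inverted atomic weights; copper (Z = 29) is added purely as an illustrative scale check:

```python
# Moseley's law: nu_Kalpha = (3/4) R (Z - 1)^2, R the Rydberg frequency.
# Co (Z = 27, A = 58.9) and Ni (Z = 28, A = 58.7) come out in Z order
# even though cobalt is the heavier atom.  Copper's predicted K-alpha
# wavelength lands near the measured ~1.54e-8 cm (1.54 angstrom).
R = 3.29e15      # 1/s, Rydberg frequency
c = 2.998e10     # cm/s, speed of light

def nu_K_alpha(Z):
    return 0.75 * R * (Z - 1) ** 2

nu_Co, nu_Ni = nu_K_alpha(27), nu_K_alpha(28)
wavelength_Cu = c / nu_K_alpha(29)   # on the order of 1.5e-8 cm
```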

Moseley conceived that his formula for Kα, which can be written in the Balmer-like manner
νKα = R(1/1² − 1/2²)(Z − 1)², confirmed Bohr's condition on the angular momentum in the ground
state. But neither he nor Bohr could find a way to derive it from Bohr's principles. In
October 1914 Walter Kossel (1888-1956) examined Moseley's formulas from a point of view
natural to a physicist in the x-radiated atmosphere of Munich. Kossel guessed that Kα arose
from the transition of an electron from the second to the first ring. On his model, conservation
of energy required νK = νKα + νL, where νK and νL represent the maximum frequencies or
edges of the K and L series. The formula fit the facts. By the time World War I put an end
to classical physics, characteristic x-ray spectra were regarded as important confirmation of
Bohr's ideas – even though no one ever found a derivation of the formulas from the ideas.

Then a serious difficulty appeared to jam the works. The minimum excitation potential (EP)
for the helium spectrum measured in Wien's laboratory at Würzburg stood 4 eV above
helium's ionization potential (IP) as determined by James Franck (1888-1964) and Gustav
Hertz (1887- 1975) in Berlin. Bohr required radiation to occur when a valence electron left
an excited bound state, so that EPmin < IP. He turned the threat of the Franck-Hertz experiment
to his advantage as deftly as he had the challenge of the spectroscopists over the helium
spectrum. He conjectured that the current that Franck and Hertz attributed to ionization in
fact arose from the photo-effect: light emitted in transitions in excited atoms drove electrons
from the negative detector, mimicking an incoming stream of positive ions. American
physicists confirmed his conjecture during the war.

A final unexpected bit of support for Bohr's theory came unwillingly from Johannes Stark
(1874-1957). Following as usual ideas peculiarly his own, he had subjected a beam of
hydrogen ions to a strong electric field between a perforated cathode and a more negative
electrode installed close behind it. No one else believed that anything of interest would
result; earlier analysis had shown that the vibrations of atoms would not be affected
detectably by electric fields of attainable strength. In November 1913 he announced that his
apparatus split the second and third Balmer lines into five components each.

Since classical physics did not provide for the Stark effect, several analysts rushed to try
quantum theory. Among them was Antonio Garbasso (1871-1933), professor of physics at
the University of Florence, whose interest had been aroused by the misfortune of his
assistant, Antonino Lo Surdo, who had noticed a broadening of lines in the spectrum of
hydrogen canal rays before Stark reported it, but who had neglected to publish his
observation. Garbasso and then Bohr developed different ways of saving the Stark effect on
Bohr's new principles. Both came within a factor of two of reproducing Stark's
measurements.

Many physicists did not care for the sacrifice demanded by Bohr's substitution of h for, as
Thomson put it, "a knowledge of the structure of the atom." Lorentz asked for a mechanical
account of Bohr's model. Wien opined that good physics demanded sticking to "the general
validity of the laws of theoretical physics even for atoms," and condemned Bohr's theory for
"saying absolutely nothing about the mechanics of radiation." Rutherford identified as the
single serious weakness in the theory its lack of a space-time description of radiation. He
wrote Bohr: "There appears to me to be one grave difficulty in your hypothesis, which I
have no doubt you fully realize, namely, how does an electron decide what frequency it is
going to vibrate at when it passes from one stationary state to another? It seems to me that
you would have to assume that the electron knows beforehand where it is going to stop."
Stark allowed himself to press the same objection. But this was to bellyache without
offering a remedy.

Perhaps E.P. Lewis, professor of physics at the University of California, best expressed the
interest and the reservations of informed observers. In a speech given as Vice President of
the American Association for the Advancement of Science, he said of Bohr's work and its
promise: "Hesitant as we may be to accept in all its details a theory which asks us to
abandon laws upon which we have pinned our faith, this theory, and the quantum theory as
well, may be the flashes of genius that reveal incompletely the outlines of the truth towards
which we struggle along a dimly lighted path." Bohr is the Ptolemy of this strange new
world. "Some day the Kepler and the Newton of the atom may appear."

Sources

Badash, Lawrence. Rutherford and Boltwood. Letters on radioactivity. New Haven: Yale Univ. P., 1969.

Bohr, Niels. Collected works. Vols. 1-2. Amsterdam: North Holland, 1972-81.

Curie, Marie. Oeuvres. Warsaw: Académie polonaise des sciences, 1954.

Hahn, Otto. A scientific autobiography. New York: Scribners, 1966.

Larmor, Joseph. Aether and matter. Cambridge: Cambridge Univ. P., 1900.

Meyer, Victor. "Probleme der Atomistik." Gesellschaft deutscher Naturforscher und Ärzte,
Verhandlungen, 67 (1895), 95-110.

Nagaoka, Hantaro. "Kinematics of a system of particles illustrating the line and band spectrum and
the phenomena of radioactivity." Philosophical magazine, 7 (1904), 445-55.

Romer, Alfred. Radiochemistry and the discovery of isotopes. New York: Dover, 1970.

Rutherford, Ernest. Collected papers. 3 vols. London: George Allen and Unwin, 1962-65.

Soddy, Frederick. Radioactivity and atomic theory...Facsimile reproduction of Annual progress reports on
radioactivity from 1904 to 1920. Ed. T.J. Trenn. London: Taylor and Francis, 1975.

Thomson, J.J. "On the structure of the atom." Philosophical magazine, 7 (1904), 237-65.

Thomson, J.J. "On the number of corpuscles in an atom." Philosophical magazine, 11 (1906), 769-81.

Thomson, J.J. The corpuscular theory of matter. London: Constable, 1907.

Studies

Brock, W.H. From protyle to proton. William Prout and the nature of matter. Bristol: Adam Hilger, 1985.

Heilbron, J.L. "The scattering of α and β particles and Rutherford's atom." Archive for history of exact
sciences, 4 (1968), 247-307.

Heilbron, J.L. H.G.J. Moseley, 1887-1915. The life and letters of an English physicist. Berkeley: Univ. of
California P., 1974.

Heilbron, J.L. Historical studies in the theory of atomic structure. New York: Arno, 1981.

Heilbron, J.L., and T.S. Kuhn. "The genesis of Bohr's atom." Historical studies in the physical and
biological sciences, 1 (1969), 211-90.

Hermann, Armin. Arthur Erich Haas. Der erste Quantenansatz für das Atom. Stuttgart: Battenberg,
1965.

Hoyer, Ulrich. "Über die Rolle der Stabilitätsbetrachtungen in der Entwicklung der Bohrschen
Atomtheorie." Archive for history of exact sciences, 10 (1973), 177-206.

Hentschel, Klaus. Mapping the spectrum. Oxford: Oxford Univ. P., 2002.

McCormmach, Russell. "The atomic theory of J.W. Nicholson." Archive for history of exact sciences, 2
(1966), 160-84.

Nye, Mary Jo. Molecular reality. A perspective on the scientific work of Jean Perrin. London: Macdonald,
1972.

Pais, Abraham. Niels Bohr's times. Oxford: Oxford Univ. P., 1991.

Rosenfeld, Léon. "Introduction." In Niels Bohr. On the constitution of atoms and molecules.
Copenhagen: Munksgaard, 1963. Pp. xi-liv.

Trenn, T.J. The self-splitting atom. A history of the Rutherford-Soddy collaboration. London: Taylor and
Francis, 1977.

IV. The Physicists' War

The victorious allied scientists liked to say, using the master metaphor of the time, that the war to end
all wars "forced science to the front," that is, to the top of the public agenda as well as to the
battlefield. Physicists did particularly well. Their contributions to many new machines and
technologies of war -- airplanes, telecommunications, precision gunnery, submarines and sonar, to
name but a few – had demonstrated that the military, government, and industry could not do without
them.

The battlefield consequences of the war work of physicists were felt most fully in the art of artillery.
Since trench warfare on the Western front soon settled into an artillery duel, improvements in gun
laying and target acquisition had high priority among general staffs and the agencies for mobilizing
scientists. Much of the reconnaissance work of the air forces created by both sides during the war
related to the requirements of the artillery; and the need to communicate the locations of enemy
emplacements gave a significant push to the improvement of radio. Physicists contributed
importantly to war in the air and the ether. They led the way in the war in the water, in the priority
project of detecting and destroying submarines.

Physicists did not begin the war in these occupations, however, and came to them only when shortage
of strategic materials and stalemate in the trenches drove military leaders and government officials to
seek their expertise.

1. To and From the Battlefield

Many young physicists in Austria, France, and Germany were mobilized just before or just after the
declaration of war in the active or reserve units to which they belonged. Many who for reasons of age
or health had no assignment volunteered. The institutes of physics in the universities emptied. In
France, the Ecole Normale Supérieure, the largest French nursery for physicists, was turned into a
hospital. Total enrollments in the French universities fell about 75 per cent in the first years of the
war. Restricting attention to physicists already established in their careers, the reported slaughter
suggests that they ran to the battlefield in droves: around 200 (more than existed in all French
universities in 1900) if their casualty rate was the same as the German.

The better organized Germans and Austrians at first used their young physicists to little better effect
than did the French. About half of the 160 members of academic physics institutes reported by the
Physikalische Zeitschrift as serving in the field went into technical combat arms, primarily artillery
but also communications and observation; between a quarter and a third began and perhaps stayed in the
infantry.

Although Britain did not mobilize as quickly as the continental powers, it squandered its scientific
manpower no less recklessly. The two ablest young experimental physicists in the country, William
Lawrence Bragg (1890-1971) and Henry Moseley (1887-1915), the one about to become a Nobel
laureate, the other a certain recipient had he lived, took their places with the troops. Trinity College,
Cambridge, long famous for its physicists and mathematicians, became a hospital. Almost all the
research staff of Britain's leading physical laboratory, the Cavendish, also at Cambridge, had either
left for the war by early 1915 or were vigorously training for it.

Moseley's death while operating a telephone in combat at Gallipoli became a symbol of the
squandering of scientific talent on the allied side. The Central Powers could point to Karl
Schwarzschild (1873-1916), Geheimrat, Director of the great Potsdam observatory, contributor to
general relativity and the quantum theory of the atom, who died of a disease contracted while serving
in an artillery unit on the Russian front; and to Friedrich Hasenöhrl (1875-1915), Austria's best
theoretical physicist, killed in action by a grenade. Otto Sackur, professor at the University of
Breslau, one of the world's leading theorists in chemical thermodynamics, was blown to pieces in
December 1914 while experimenting with explosives. Nature pointed out unsympathetically that he
had never been very good at practical work.

Beginning in the second half of 1915, scientists on the allied side gradually withdrew from the front.
Two main considerations forced their retreat: a vacuum in the stock of munitions and of strategic,
high-tech items previously imported entirely from Germany or Austria, and a yellow gas that rolled
over the battlefield at Ypres in April 1915. War planners awakened to the utility of chemists and
physicists.

The German High Command reached a similar conclusion under the influence of superior French
artillery fire. Almost all the casualties reported in Physikalische Zeitschrift occurred in 1914 and
1915; only one death is indicated for 1916, and none for 1917 or 1918. Available biographical
information supports the obvious inference: Max Born (1882-1970), James Franck (1882-1964),
Gustav Hertz (1881-1975), Alfred Landé (1888-1975), Carl Ramsauer (1879-1955), and many others
were reassigned from the battlefield, sometimes via the hospital, in 1915 and 1916.

The British Prime Minister told Parliament in December 1915 that the government was making every
effort to use scientific manpower as efficiently as possible. The praise of physicists by the
Commander in Chief of the British Forces in France helped to overcome the opposition of the
services to advice from outside the military; and by the last year of the war even the critical editor of
Nature could find no fault in the employment of his constituency.

At the outset of war, Britain depended on Germany for magnetos to start its cars and planes, TNT to
supply its artillery, and dyes to distinguish its Army's uniforms from its Navy's. It was humiliating as
well as threatening. To free itself of dependency, Britain needed to copy what propagandists had then
recently diagnosed as the worst feature of German science and life -- organization and discipline.
Similar rhetoric resounded in France, where technical shortages were perhaps more critical even than
in England. To overcome them would require German-style discipline and the surrender of the
individualism that had ennobled the soul, and ruined the industry, of the patrie.

What allied appropriation of German organization could achieve appears most perspicuously through
optical munitions. In 1914 the allies possessed very few facilities for making optical glass or for
fashioning it into instruments of war. Britain's new Army of a million men needed 70,000 pairs of
field glasses, 15,000 telescopes, and 10,000 range finders, not to mention periscopes to peep over
trenches, lenses for cameras for aerial photography, lenses for searchlights, etc. In desperation, the
War Office thought to tap its old source of supply. A deal was worked out, through Swiss
intermediaries, by which the Germans would supply 30,000 binoculars by the end of 1915, and
15,000 a month thereafter, in return for a certain quantity of rubber.

In the event, the British did not have to rely on German optical munitions. The newly mobilized
chemists and physicists rediscovered the formulas and processes used by Zeiss, improved their
efficiency, and oversaw their implementation in newly built factories. Meanwhile their colleagues in
the universities trained "dilutees" (as new, unskilled labor was called) in the assembly of optical
instruments. Women dilutees proved to be excellent lens grinders. By the war's end, the British were
making a quantity of quality glass equal to twice the world's consumption in 1914.

The French achievement in organizing technical production was first revealed to the public in June of
1916, when the Société d'encouragement pour l'industrie nationale held an exposition of items of
domestic manufacture for which France had previously depended on imports from Germany and
Austria. Pride of place went to chemical glassware. The British held similar exhibits, one in July
1916, in Manchester, under the auspices of the Society of Chemical Industry, which also displayed
laboratory glass, serum ampules, radiological apparatus, and optical glass. Two years later, the
British Science Guild celebrated the progress in technical autarky made since 1914. They showed the
by then standard glass apparatus and some interesting novelties, like furnace bricks, tungsten alloys,
thermometers, dyes and developers for photography, heavy-duty electrical insulators, graticules for
optical instruments, and thermos flasks, all previously German monopolies.

2. With the Artillery

Surveying and mapping

Mapping the battlefield presented surveyors with unusual problems. Not only did they have to
determine distances with unprecedented accuracy while under fire, they had to do it over and over
again as battle lines shifted, gun emplacements moved, and barbed wire grew or disappeared into the
mud. No adequate maps existed to the scale desired in 1914. The French general staff worked with
maps made at 1:84000, the Germans with Kriegskarte at 1:100000; but what was wanted for trench
warfare were local maps at 1:25000 or, for special cases, 1:10000, one centimeter to 100 meters.

The German Army blamed its defeat at Arras in 1914-15 on its poor maps. In response it set up a
Kriegsvermessungswesen (Military Survey Service) but found it hard to staff. Most of the Prussian
State Survey (a military organization) had gone to the front as cannon fodder. The Army, vaunted for
its organization, had planned badly for a war of maps. Gradually, however, all serving surveyors and
available geographers, cartographers, and mathematicians were enrolled in the great task of finding
out where things were. Aerial photography helped to correct the older maps once techniques of taking
and deciphering the pictures had been developed; eventually the Germans had a set of maps of the
active Western front drawn to a scale of 1:25000.

The British did not rate the results very highly. Captured German maps, exhibited as trophies at the
Royal Geographical Society, were good only for areas behind the front and excellent only for the
Hindenburg line. How did the Germans, rightly proud of their technology and their mapmaking, fall
so far off the mark? Nature knew. "The German Staff was not scientifically organized."

The French and the British set up their brigades du service géographique and field survey battalions
in the winter of 1914/15, just as the Germans did. But they had the advantage of the full set of French
military maps based on the metric survey of Revolutionary times and good British charts of Belgium.
The allies checked the triangulations of the old maps, added new trigonometric points, put in roads
and villages, and updated the information with aerial photography. These became the basis of
battlefield maps that located strategic sites to within 20 yards at a distance of 9 miles.

Spotting and ranging

The obvious way to locate a distant gun is to look at its muzzle flash from two widely separated
stations with known geographical coordinates. Bearing and distance then follow immediately by
triangulation. Spotters stationed in tethered balloons looked behind enemy lines via telescopes and
reported their sightings to the ground by telephone. Evidently, flash spotting did not work in bad
weather or dense smoke or against hidden or far distant batteries.
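
The triangulation involved is elementary. The sketch below (modern Python, offered only as an illustration; the station coordinates and bearings are invented, not drawn from period practice) shows how two bearings from known stations fix a target as the intersection of the two sight lines:

```python
import math

def intersect(p1, az1, p2, az2):
    """Locate a gun from bearings taken at two survey stations.

    p1, p2: station coordinates (x east, y north) in metres.
    az1, az2: bearings to the muzzle flash, degrees clockwise from north.
    Returns the intersection of the two lines of sight."""
    # Unit direction vectors of the lines of sight.
    d1 = (math.sin(math.radians(az1)), math.cos(math.radians(az1)))
    d2 = (math.sin(math.radians(az2)), math.cos(math.radians(az2)))
    # Solve p1 + t*d1 == p2 + s*d2 for t by Cramer's rule.
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if abs(det) < 1e-9:
        raise ValueError("sight lines nearly parallel; stations poorly placed")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

In the field, the stations' own coordinates came from the survey battalions' triangulation nets, which is one reason mapping and spotting were organized together.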

Sound ranging located a gun by detecting the noise of its discharge at several microphones
interconnected electrically. From the differences in time of arrival of the noise at the microphones
and knowledge of the speed of sound under local conditions, the bearing and distance of the gun may
be obtained, since it must lie on a hyperbola with any pair of microphones as its foci. Combining the
signals from two pairs of microphones, the sound ranger had the position of his target as the
intersection of two hyperbolas.
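
The geometry can be put into a few lines of code. The sketch below (modern Python, not the graphical methods the rangers actually used; the microphone coordinates, gun position, and uniform speed of sound are assumptions for illustration) recovers a position from the time differences of arrival by minimizing the hyperbola residuals with a coarse-to-fine grid search:

```python
import math

C = 340.0  # assumed speed of sound in air, m/s; in practice it varied with weather

# Hypothetical microphone line behind the front (coordinates in metres).
MICS = [(-1500.0, 0.0), (-500.0, 0.0), (500.0, 0.0), (1500.0, 0.0)]

def arrival_times(gun, mics=MICS, c=C):
    """Time for the muzzle report to reach each microphone."""
    return [math.hypot(gun[0] - mx, gun[1] - my) / c for mx, my in mics]

def locate(times, mics=MICS, c=C):
    """Recover the gun position from differences in arrival time.

    Each microphone pair constrains the gun to one branch of a hyperbola
    whose foci are the two microphones; the position is found numerically
    by shrinking a search grid around the point of smallest residual."""
    def residual(x, y):
        r0 = math.hypot(x - mics[0][0], y - mics[0][1])
        err = 0.0
        for (mx, my), t in zip(mics[1:], times[1:]):
            dr = math.hypot(x - mx, y - my) - r0   # predicted range difference
            err += (dr - c * (t - times[0])) ** 2  # vs. measured c * time lag
        return err

    best, step = (0.0, 5000.0), 1000.0
    while step > 0.1:
        bx, by = best
        candidates = [(bx + i * step, by + j * step)
                      for i in range(-10, 11) for j in range(-10, 11)
                      if by + j * step > 0]        # guns lie on the enemy side
        best = min(candidates, key=lambda p: residual(*p))
        step /= 5.0
    return best
```

Two hyperbolas generally intersect in more than one point; restricting the search to the enemy side of the microphone line, as the comment notes, resolves the ambiguity.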

The system had to discriminate between the noise of a passing shell and the pressure wave from the
piece that fired it; to detect time intervals as small as a hundredth of a second dependably; to give
usable results easily; and to hold up under combat conditions. The first in the field were the French,
who began experimenting in September 1914 with a method proposed by a professor at the Paris
observatory, Lucien Bull. With many improvements it made possible electric registration of the
output of the microphones on moving film or paper, and discrimination of shell sound from muzzle
noise. Clever graphical methods assisted the reduction of the time differences in the reception of the
muzzle noise by the listeners. A good analyst could pinpoint a battery in a minute or two after
receiving the sound-ranging data. The chief error of the method lay in uncertainty about the
temperatures and wind currents prevailing between the gun and the detectors. Repeated
determinations of the same position in serene weather, however, could fix distances to under 20 m at
4 km.

The first units saw action in 1915. Physicists designed significant improvements during 1916; by the
winter of 1916/17 the allies had covered the Western Front with effective sound ranging units
commanded by officers thoroughly versed in physics and mathematics. Behind the lines the rangers
ran a school and even an academy, where officers could teach one another the lessons of their battle
experience. There were also experimental facilities with good shops and staffs dedicated to the
improvement of the art.

Together with the mapmakers, the sound rangers transformed artillery practice. Previously the big
guns had been directed primarily at the trenches and their wire fortifications. But beginning with the
breakthrough in Cambrai (1917), the guns first took out their opposite numbers, without trial shots.
That surprised the enemy. The Americans, trained on the British system with the help of the
physicists W.L. Bragg and Charles Galton Darwin (1887-1962), did very well at their first test,
wiping out guns that the French had not been able to locate.

Where were the German sound rangers? The Artillerieprüfungskommission (Artillery Testing
Commission) brought together some of the best physicists in the Vaterland to work on the problem
beginning in 1915: Born, Landé, Max Wien, Rudolf Ladenburg (1882-1952), Erwin Madelung (1881-
1972), Ferdinand Kurlbaum (1857-1927). They worked out a system in which oscillographs displayed
and analyzed the data from the microphones. But the General Staff could not be persuaded to allocate
the scarce resources needed to deploy the system and well into 1918 German sound rangers used
stopwatches to determine time intervals. The subjective punching of the stopwatch, although
performed by men selected and trained to have the same reaction times, was not only less accurate
than electric registration, but also much slower. The Americans estimated that what they could do in
two minutes took the Germans an hour. Why this backwardness? According to the head of the
Kriegsvermessungswesen, because the artillery officers disliked learnedness.

The weather front

By 1914, all civilized nations possessed extensive networks of weather stations and central
meteorological institutes linked by telegraph lines. The national systems collaborated freely: an
important part of their data and of their self-image came from their international organization. In
September 1914, the British Meteorological Office stopped issuing its usual weather information;
after the gas attack at Ypres, it stopped public forecasting; during the last four months of the war, the
British government would not allow any mention of the weather in the newspapers.

The Germans considered weather forecasting important enough to the success of their arms that they
brought meteorologists into Belgium with the invading force. These men immediately replaced the
native staff at the observatory in Liège, took over the headquarters of the Belgian weather service in
Brussels, and commandeered a factory to make hydrogen for sounding balloons to sample conditions
in the upper atmosphere. They then prognosticated weather for the great attack on Antwerp and for
air raids on Britain.

In France, despite efforts begun in 1911, the armed forces had no meteorological service at the
outbreak of war. No more did the United States or the United Kingdom. The conduct of the war in
1915 showed the allies the error of this conception. The gassing of Ypres in unusually favorable
weather made clear the advantage of knowing in advance the direction of winds above the battlefield;
in direct response the British and the French set up meteorological services within their armed forces
and brought in experts from universities and from the existing civilian weather bureaus. The same
personnel provided the exact knowledge necessary to correct the standard gunnery tables.

Hydrogen-filled balloons, some with self-registering instruments, procured the information needed
for exact shelling, for attack with and defense against poison gas, and for guidance of aviators. By
the war's end the allied artillery commands received reports of air temperature, density, and velocity
at several heights every four hours from observations made at central field stations. The British, who
in the year 1913/14 launched so few balloons that no one bothered to count them, were sending up
over 13,000 a month in 1918. Austrian and German meteorologists provided similar quantities of
information in similar ways.

The cut off of information during the war forced the neutral Norwegians to replace data with theory.
Contemplating the struggles of huge armies stretched against one another across Belgium and France,
they got the idea of a global battle between polar and equatorial air masses. In conscious analogy to
the shooting war below them, a group headed by the physicist Vilhelm F.K. Bjerknes (1862-1951)
developed the concept, and the terminology, of the weather front. The head of the French navy's
meteorological service, Jules Rouch, hit on the same inescapable image. "To use a comparison from
current affairs, the synoptic [weather] map, showing us the disposition of atmospheric forces, is
analogous to a map giving the exact position of enemy forces."

3. Elsewhere

In the Air

In 1914 the armies of Germany, France, and Britain together had fewer than 500 airplanes and used
them primarily for training. At the Armistice Britain alone had 22,000 planes and an air force of
290,000 men. In a monumental competition and unintentional cooperation (via capture of enemy
aircraft with the latest inventions), the belligerent countries improved the performance of the weak,
unsteady, unreliable biplane of 1914 into a strong and dependable machine by 1919. Between 1914
and 1917 the average speed of fighters rose by two-thirds, from 120 to 200 km/hour; attainable ceilings
exactly doubled, from 3000 to 6000 meters. The pressure for better planes forced the belligerents to
invest in aerodynamics.

The world's leading theorist of the dynamics of flight was Ludwig Prandtl (1875-1953), director of
the Institute for Technical Physics at the University of Göttingen. By 1914, Prandtl had adapted his
insights into viscous flow and drag in fluids to the problems of flight; had discovered the boundary
layer of viscid flow over airfoils, one of the most important contributions ever made to aerodynamics;
had built a wind tunnel to test his ideas and devised corrections for scaling from the laboratory to the
field; and had attracted several good students. His financing and influence remained meager,
however, until the war. His strategic importance was then recognized and in 1917 the Kaiser-
Wilhelm-Gesellschaft opened its Aerodynamische Versuchsanstalt with Prandtl as director. He
pursued such capital problems as drag induced by vortices trailed by airfoils.

Prandtl's most distinguished student, Theodor von Kármán (1881-1963), spent the war as head of the
Austro-Hungarian air corps' experimental division, where he improved the fire power and shielding of
imperial airplanes. The mathematician Richard von Mises (1883-1953) served the same air corps as
instructor of its technical staff. Another of Prandtl's students, Max Munk, so distinguished himself in
aerodynamic work of importance to the military that immediately after the war President Woodrow
Wilson issued an order cleansing him of the stigma of enemy alien and appointing him an American
government scientist.

On the allied side, the wind tunnels created by Gustave Eiffel (1832-1923) beside his tower in the
Champ de Mars and by Leonard Bairstow (a graduate of the Royal College of Science who became a
fellow of the Royal Society in 1917) at the National Physical Laboratory made tests on model aircraft
and airfoils that influenced the design of war planes. Eiffel found the great decrease in drag
consequent on the transition from laminar to turbulent flow in the boundary layer; Bairstow's
experiments on the size and placement of the tail, wing forms, and so on, were incorporated at the
Royal Aircraft Factory into the design of some of the most important British war planes. To develop
the designs further the Factory engaged some of the country's leading physicists: Frederick
Alexander Lindemann (the future Lord Cherwell, 1886-1957) and his arch-rival Henry Tizard (1885-
1959), Francis Aston (1877-1945), Geoffrey Ingram Taylor (1886-1975), and George Paget Thomson
(1892-1975). In France, the Duc Maurice de Broglie (1875-1960), his brother Prince Louis (1892-
1987), and other future professors worked on the equipment and improvement of airplanes.

The United States established a central agency, the National Advisory Committee for Aeronautics, in
1915 to coordinate, and after 1920 to funnel federal funds into, aeronautical research. One of the
first pieces of advice the Committee gave the U.S. Army Air Corps was to scrap the 55 planes it
possessed when it entered the war. Lagging behind its allies, the Air Corps decided to adapt their
successful airframes and to channel Yankee ingenuity into the creation of a new power plant. The
adaptation involved preparing tens of thousands of technical drawings in order to reduce European
practice, which still made some use of hand-made and hand-fitted parts, to the methods of American
mass production. Many parts were tested in wind tunnels or under controlled static stresses under the
supervision of an MIT graduate and professor, Alexander Klemin. The major American contribution
to wartime aeronautics was the famous 12-cylinder Liberty engine. These initiatives contributed
significantly to the transformation of homemade aircraft into properly engineered machines. Although
their full force was not felt before the peace, American output of men and materiel for the air war was
not inconsiderable: over 10,000 planes, 16,000 engines, 8600 pilots, and an Air Corps of 200,000
men.

In the ether

The armies and navies of the belligerent powers were almost as poorly equipped for battle in the ether
as in the air. But like the airplane, the radio had reached a state of development in 1914 that allowed
very rapid development once its strategic importance was realized. The situation immediately before
the war may be indicated by the choices available to the U.S. Navy in 1913, when it decided to create
a long-distance wireless communication system. One system, based on the rapid repetition of sparks,
gave an uncomfortably broad band but had proved itself reliable over a range of around 1000 miles.
Its competitor, based on the periodic striking of an arc in a partial vacuum, gave a narrower band but
used less familiar technology. The arc outdistanced the spark in sea trials and the Navy
commissioned a range of arc installations, from the 30 kW set demonstrated in the trials up to
untested units of 500 kW for shore stations.

In 1918 the Navy ordered four 1000 kW arc generators for a link between Washington and Bordeaux;
the antenna on the French side stretched over 48,000 square meters and rested on eight masts, each
only a little shorter than the Eiffel tower. The war ended before the towers could be completed.
Nonetheless, the French government decided to proceed with what it called the Lafayette station and
the U.S. Navy rushed the job through lest (so the official report states) its servicemen succumb to the
good wine and bad women of Bordeaux. When operations began in December 1919, the station's
signal received in San Francisco was four to eight times stronger than signals from other major
European transmitters. The remaining two arc converters rusted away in a warehouse until 1931,
when their owners gave one to Ernest Lawrence for reworking into one of the oddest legacies of the
Great War, the 27-inch Berkeley cyclotron.

The arc system and the spark generator were obsolescent technologies when the U.S. Navy chose
between them. Around 1912 physicists and electrical engineers realized that the triode invented by
Lee de Forest (1873-1961) in 1904, which had found some use as an amplifier in wireless receivers,
could be used to generate a constant wave at radio frequencies and, as it turned out, at high power. At
the same time Bell Labs was perfecting the use of the same sort of valves as amplifiers in long-
distance telephony. In 1915 the first transcontinental telephone line was completed. Military needs
dramatically increased the demand for robust valves for telephony and radio: the U.S. made 400
valves a week before entering the war, 80,000 a week in 1919; to help install and deploy them for
war, Bell Labs sent 4500 engineers and technicians, divided into fourteen battalions, to Europe. The
Germans scaled up in a similar way. The German army had 6350 men in all its communications
units in 1914; at war's end they numbered 190,000.

Throughout the war physicists and engineers strove to deliver a reliable airborne radio.
Reconnaissance from the air was greatly hampered by want of a quick and convenient way to relay
information to the ground or to friendly aircraft. Without radio a pilot who spotted an enemy battery
had to signal by mirror or smoke; and pilots in the same squadron could not communicate with one
another when aloft. By the end of October 1914, the French had small spark units driven by batteries
in a few planes; but noise of the engine, not to mention the bulkiness of the units and the danger of
fire from the spark, limited their effectiveness. Some of these failings were gradually overcome by
improving magnetos, shielding, and headgear in the plane, and the receiver on the ground. Maurice
de Broglie, Henri Abraham (1868-1943), Léon Brillouin (1889-1969), Eugène Bloch (1878-1944), and
other physicists working under Gustave-Auguste Ferrié at the Laboratoire de Radiotélégraphie
Militaire contributed to the solutions. The British installed vacuum-tube oscillators on some of their
bombers late in 1915, but lack of appropriate valves hampered development and deployment. The
Americans made airborne radiotelephony and the mass production of the necessary valves a high
priority; and by the Armistice had effective sets for two-way transmission between planes and from
planes to ground. The Chief Signal Officer of the U.S. Army rated airborne radiotelephony "one of
the most spectacular achievements of the whole war," as indeed it was, and of "inestimable industrial
and scientific value."

In the water

The Allied blockade strengthened the hands of elements in the German military who called
for all-out war, for the use of poison gas, and for the torpedoing of every vessel on the high
seas. This last policy risked bringing the United States into the war, and eventually did.
German U-boat enthusiasts had argued that if they could sink 600,000 tons of allied shipping
per month for six months, they could starve Britain into surrender. The toll met the goal, but
did not achieve the purpose: the six-months' loss of 3.9 million tons was replaced by
American shipyards and by requisitions and confiscations of other vessels. Still the sinking
went on, another 4.5 million tons by the war's end, and the allies mounted a crash program to
detect submerged submarines. What progress the French and British had made by May 1917
was revealed to the United States (along with many other new instruments of war) by a
special delegation of their leading scientists.

Ernest Rutherford (1871-1937) led the British contingent. For two years or so he had been studying
how sound moved through water in tanks he had had installed in the basement of his laboratory in
Manchester. When the work moved to a navy base, William Henry Bragg (1862-1942) took charge
of its scientific side. The method they developed relied on the human ear improved
by "hydrophones" for picking up sound at sea. A patrol boat carried or towed two phones
symmetrically placed on either side of its keel and connected by air-filled tubes to the listener's ears.
To determine where an underwater sound came from, the listener turned the phones until he judged
the noise levels in his ears to be equal. The source of the noise then lay on the perpendicular bisector
of the line joining the phones. Rutherford and his colleagues had many practical problems to
overcome before this passive system could be made operational, in particular, designing phones and
tubes able to amplify weak underwater sounds and blocking out noises arising from the patrol boat's
own machinery.
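
The principle is easy to simulate. In the sketch below (modern Python; the baseline, sound speed, and bearings are invented for illustration), the "listener" turns the pair of phones until the time difference between the two ears vanishes, which happens exactly when the source lies on the perpendicular bisector of the baseline:

```python
import math

C_WATER = 1480.0   # rough speed of sound in seawater, m/s (assumed)
BASELINE = 3.0     # assumed separation of the two hydrophones, metres

def interaural_delay(heading, source_bearing, d=BASELINE, c=C_WATER):
    """Time difference between the two phones for a distant noise source.

    `heading` is the direction of the perpendicular bisector of the
    baseline; the delay vanishes when it points straight at the source."""
    return (d / c) * math.sin(math.radians(source_bearing - heading))

def find_bearing(measure):
    """Mimic the listener: sweep the mount through trial headings (here
    restricted to the forward semicircle, since the null is ambiguous
    fore and aft) and return the heading where the two ears balance."""
    return min(range(-90, 91), key=lambda h: abs(measure(h)))
```

The null fixes only the line of the bisector; in practice repeated fixes as the patrol boat moved narrowed down the source.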

An active approach -- sending out sound waves and monitoring their reflection from the
object of search -- had much to recommend it. It could work even when the enemy made no
noise, and it operated at a special frequency that distinguished its signal from the search
ship's own sounds.
Only sound of very high frequencies, far above the audible range, could travel in water
without unacceptable loss of energy. Paul Langevin (1872-1946) and several French
physicists had begun to develop an active detection system based on "ultra sound" using as
generator a device based on the piezo-electric effect. The French, Rutherford's group, and
American physicists all tried to make a workable active detection system based on ultra
sound. A practical apparatus arrived too late to see much action during the war. It
subsequently became the basis of sonar, an instrument useful in peacetime to locate icebergs
and oil deposits.

A contemporary observer reckoned that the war had accelerated "the application of science to the uses
of humanity" by 50 or 100 years. Demobilization gave civilian industries manpower skilled in the
operation and maintenance of electrical power plants, internal combustion engines, airplanes,
telephones, and wireless in numbers far greater than could have been produced in peacetime. An
example from one of the smaller theaters of war will give an idea of the scale. In June 1917 the unit
of the Austro-Hungarian Army responsible for electrotechnology had almost 12,000 officers and
men; they exceeded in number all the graduates in physics and electrical engineering from all the
Austrian and Hungarian universities and higher technical schools during the preceding decade.

The application to peacetime uses of devices and manpower from the physicists' war came swiftly.
By the early 1920s men familiar with high-potential electrical apparatus were electrifying the
railroads of Europe and the United States. Advances in military wireless underpinned commercial
broadcasting. Former bombers and their pilots adapted readily to passenger and mail flights.
European and American airlines, none of which existed in 1914, flew over ten million scheduled
kilometers in 1924, the equivalent of fifteen round trips to the moon.

Sources

Auerbach, Felix. Die Physik im Kriege. 3rd edn. Jena: G. Fischer, 1916.

Ayres, Leonard P. The war with Germany. A statistical summary. Washington: GPO, 1919.

British Scientific Products Exhibition. Descriptive catalogue ...with articles on recent developments. London:
British Science Guild, 1919.

Brunoff, Maurice de, ed. L'aéronautique pendant la guerre. Paris: 1920.

Eckert, Max. "Die Kartographie im Kriege." Geographische Zeitschrift, 26 (1920), 273-286; 27 (1921), 18-28.

Exner, Felix M. von. "Meteorologische Erfahrungen im Kriege." Verein zur Verbreitung
naturwissenschaftlicher Kenntnisse, Wien. Schriften, 58 (1918), 219-252.

[France.] Ministère de la Guerre. Notice sommaire sur l'écoute (Bruits souterrains, bruits aériens, etc.). Paris:
Lardre, 1918.

Hinman, Jesse R. Ranging in France with flash and sound. Portland, OR: Dunham, 1919.

Lindet, L. Exposition du matériel de laboratoire de fabrication exclusivement française. Paris: Renouard,
1916.

Parsons, Charles A. "Science and engineering and the war." British Association for the Advancement of
Science. Reports, 1919, 3-23.

Rudin, Robert Pollock Ritter von. Die Elektrotechnik im Kriege. Vienna: Verlag für Fachliteratur, 1919.

Schmid, Bastian, ed. Deutsche Naturwissenschaft, Technik und Erfindung im Weltkriege. Munich-Leipzig:
1919.

Schwarte, Max, ed. Technik im Weltkriege. Berlin: Mittler, 1920.

Schwarte, Max, ed. Der Grosse Krieg, 1914-1918. 10 vols., Leipzig: Barth, 1921-33. Vol. 8: Die
Organization der Kriegsführung.

Shaw, W. Napier. "Meteorology." Royal Meteorological Society, Quarterly journal, 45 (1919), 95-111.

[U.S.] War Department. Annual report. Report of the Chief Signal Officer. Washington: GPO, 1919.

Yerkes, Robert M., ed. The new world of science: Its development during the war. New York: Century, 1920.

Studies

Aitken, Hugh G.J. The continuous wave. Technology and American radio, 1900-1932. Princeton: Princeton
Univ. P., 1985.

Bilstein, Roger E. Flight in America. Baltimore: Johns Hopkins Univ. P., 1994.

Chasseaud, Peter. Artillery's astrologers: A history of British survey and mapping on the Western Front 1914-
1918. Lewes (UK): Mapbooks, 1999.

Forman, Paul, and José M. Sanchez-Ron, eds. National military establishments and the advancement of science
and technology. Dordrecht: Kluwer, 1996.

Galison, Peter, and Alex Roland, eds. Atmospheric flight in the twentieth century. Dordrecht: Kluwer, 2000.

Hackmann, Willem. Seek and strike. Sonar, anti-submarine warfare and the Royal Navy 1914-54. London:
HMSO, 1984.

Innes, John R., ed. Flash spotters and sound rangers. London: George Allen and Unwin, 1935.

Kevles, Daniel J. The physicists. New York: Knopf, 1978.

MacLeod, Roy M., and E. Kay Andrews. "Government and the optical industry in Britain, 1914-1918." In Jay
M. Winter, ed. War and economic development. Cambridge: Cambridge Univ. P., 1975. Pp. 165-203.

Prochasson, Christophe, and Anne Rasmussen. Au nom de la patrie. Les intellectuels et la première guerre
mondiale (1910-1919). Paris: la Découverte, 1996.

Ranc, Albert. Les ingénieurs et la guerre. La mobilisation scientifique et technique. Paris: Chiron, 1922.

