
Document Uploaded by J.D. Lim

History and Development of Statistics

Simple forms of statistics have been used since the beginning of civilization, when pictorial
representations or other symbols were used to record numbers of people, animals, and inanimate
objects on skins, slabs, or sticks of wood and the walls of caves. Before 3000 BC the
Babylonians used small clay tablets to record tabulations of agricultural yields and of
commodities bartered or sold. The Egyptians analyzed the population and material wealth of
their country before beginning to build the pyramids in the 31st century BC. The biblical books of
Numbers and 1 Chronicles are primarily statistical works, the former containing two separate
censuses of the Israelites and the latter describing the material wealth of various Jewish tribes.
Similar numerical records existed in China before 2000 BC. The ancient Greeks held censuses to
be used as bases for taxation as early as 594 BC.

The Roman Empire was the first government to gather extensive data about the population, area,
and wealth of the territories that it controlled. During the Middle Ages in Europe few
comprehensive censuses were made. The Carolingian kings Pepin the Short and Charlemagne
ordered surveys of ecclesiastical holdings: Pepin in 758 and Charlemagne in 762. Following the
Norman Conquest of England in 1066, William I, king of England, ordered a census to be taken;
the information gathered in this census, conducted in 1086, was recorded in the DOMESDAY
BOOK.

Some scholars pinpoint the origin of statistics to 1662, with the publication of Natural and
Political Observations upon the Bills of Mortality by John Graunt. Early applications of
statistical thinking revolved around the needs of states to base policy on demographic and
economic data, hence the stat- root it shares with the word "state." The scope of the discipline of statistics broadened in the
the early 19th century to include the collection and analysis of data in general. Today, statistics is
widely employed in government, business, and the natural and social sciences.

Because of its empirical roots and its focus on applications, statistics is usually considered to be
a distinct mathematical science rather than a branch of mathematics. Its mathematical
foundations were laid in the 17th century with the development of probability theory by Blaise
Pascal and Pierre de Fermat. Probability theory arose from the study of games of chance. The
method of least squares was first described by Carl Friedrich Gauss around 1794. The use of
modern computers has expedited large-scale statistical computation, and has also made possible
new methods that are impractical to perform manually.

G. Achenwall is usually credited with being the first to use the word "statistics," but statistics, in
the modern sense of the word, did not really come into existence until the publication (1761) by
J. P. Süssmilch, a Prussian clergyman, of a work entitled Die göttliche Ordnung in den
Veränderungen des menschlichen Geschlechts aus der Geburt, dem Tode und der Fortpflanzung
desselben erwiesen. In this book a systematic attempt was made to make use of a class of facts
which up to that time had been regarded as belonging to "political arithmetic," under which
description some of the most important problems of what modern writers term "vital statistics"
had been studied, especially in England. Süssmilch had arrived at a perception of the advantage
of studying what Quetelet subsequently termed the "laws of large numbers." He combined the
method of "descriptive statistics" with that of the "political arithmeticians," who had confined
themselves to investigations into the facts regarding mortality and a few other similar subjects,
without much attempt at generalizing from them.

The mathematical methods of statistics emerged from probability theory, which can be dated to
the correspondence of Pierre de Fermat and Blaise Pascal (1654). Christiaan Huygens (1657)
gave the earliest known scientific treatment of the subject. Jakob Bernoulli's Ars Conjectandi
(posthumous, 1713) and Abraham de Moivre's Doctrine of Chances (1718) treated the subject as
a branch of mathematics.[1] In the modern era, the work of Kolmogorov has been instrumental
in formulating the fundamental model of probability theory, which is used throughout statistics.

The theory of errors may be traced back to Roger Cotes' Opera Miscellanea (posthumous, 1722),
but a memoir prepared by Thomas Simpson in 1755 (printed 1756) first applied the theory to the
discussion of errors of observation. The reprint (1757) of this memoir lays down the axioms that
positive and negative errors are equally probable, and that there are certain assignable limits
within which all errors may be supposed to fall; continuous errors are discussed and a probability
curve is given.

Pierre-Simon Laplace (1774) made the first attempt to deduce a rule for the combination of
observations from the principles of the theory of probabilities. He represented the law of
probability of errors by a curve. He deduced a formula for the mean of three observations. He
also gave (1781) a formula for the law of facility of error (a term due to Joseph Louis Lagrange,
1774), but one which led to unmanageable equations. Daniel Bernoulli (1778) introduced the
principle of the maximum product of the probabilities of a system of concurrent errors.

The method of least squares, which was used to minimize errors in data measurement, was
published independently by Adrien-Marie Legendre (1805), Robert Adrain (1808), and Carl
Friedrich Gauss (1809). Gauss had used the method in his famous 1801 prediction of the location
of the dwarf planet Ceres. Further proofs were given by Laplace (1810, 1812), Gauss (1823),
Ivory (1825, 1826), Hagen (1837), Bessel (1838), Donkin (1844, 1856), Herschel (1850),
Crofton (1870), and Thiele (1880, 1889).

Other contributors were Ellis (1844), De Morgan (1864), Glaisher (1872), and Giovanni
Schiaparelli (1875). Peters's (1856) formula for r, the "probable error" of a single observation,
was widely used and inspired early robust statistics (resistant to outliers).

In the nineteenth century authors on statistical theory included Laplace, S. Lacroix (1816),
Littrow (1833), Dedekind (1860), Helmert (1872), Laurent (1873), Liagre, Didion, De Morgan,
Boole, Edgeworth, and K. Pearson. Adolphe Quetelet (1796–1874), another important founder of
statistics, introduced the notion of the "average man" (l'homme moyen) as a means of
understanding complex social phenomena such as crime rates, marriage rates, or suicide rates.

Charles S. Peirce (1839–1914) formulated frequentist theories of estimation and
hypothesis-testing in publications of 1877–1878 and 1883, in which he introduced the term
"confidence". Peirce also introduced blinded, controlled randomized experiments with a repeated
measures design, and invented an optimal design for experiments on gravity.




At present, statistics is a reliable means of describing accurately the values of economic,
political, social, psychological, biological, and physical data and serves as a tool to correlate and
analyze such data. The work of the statistician is no longer confined to gathering and tabulating
data, but is chiefly a process of interpreting the information. The development of the theory of
PROBABILITY increased the scope of statistical applications. Much data can be approximated
accurately by certain probability distributions, and the results of probability distributions can be
used in analyzing statistical data. Probability can be used to test the reliability of statistical
inferences and to indicate the kind and amount of data required for a particular problem.
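The claim that much data can be approximated by a probability distribution can be sketched with Python's standard-library statistics module. The exam scores below are hypothetical, chosen only to illustrate fitting a normal distribution to a sample and then using the fitted distribution to answer a probability question:

```python
from statistics import NormalDist

# Hypothetical exam scores; many real-world measurements are roughly normal
scores = [72, 85, 78, 90, 66, 81, 75, 88, 79, 84, 70, 77]

# Fit a normal distribution to the sample (estimates mean and standard deviation)
model = NormalDist.from_samples(scores)

# Use the fitted distribution to approximate probabilities about the data
p_above_80 = 1 - model.cdf(80)
print(f"fitted mean = {model.mean:.2f}, sd = {model.stdev:.2f}")
print(f"P(score > 80) ≈ {p_above_80:.2f}")
```

Whether the normal approximation is appropriate for a given dataset is itself a statistical question; in practice one would check the fit (for example with a histogram or a goodness-of-fit test) before relying on it.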

Mathematicians Behind Statistics

• John Graunt (English, 1620–1674): Pioneer of demography who produced the first life
table
• Thomas Bayes (English, 1702–1761): Developed the interpretation of probability now
known as Bayes' theorem
• Pierre-Simon Laplace (French, 1749–1827): Co-invented Bayesian statistics. Invented
exponential families (Laplace transform), conjugate prior distributions, asymptotic analysis
of estimators (including negligibility of regular priors). Used maximum-likelihood and
posterior-mode estimation and considered (robust) loss functions
• William Playfair (Scottish, 1759–1823): Pioneer of statistical graphics
• Carl Friedrich Gauss (German, 1777–1855): Invented least squares estimation methods
(with Legendre). Used loss functions and maximum-likelihood estimation
• Adolphe Quetelet (Belgian, 1796–1874): Pioneered the use of probability and statistics in
the social sciences
• Florence Nightingale (English, 1820–1910): Applied statistical analysis to health
problems, contributing to the establishment of epidemiology and public health practice.
Developed statistical graphics especially for mobilizing public opinion. First female
member of the Royal Statistical Society.
• Francis Galton (English, 1822–1911): Invented the concepts of standard deviation,
correlation, regression
• Thorvald N. Thiele (Danish, 1838–1910): Introduced cumulants and the term "likelihood".
Introduced a Kalman filter in time-series analysis
• Charles S. Peirce (United States, 1839–1914): Formulated modern statistics in "Illustrations
of the Logic of Science" (1877–1878) and "A Theory of Probable Inference" (1883). With
a repeated measures design, introduced blinded, controlled randomized experiments
(before Fisher). Invented optimal design for experiments on gravity, in which he "corrected
the means". Used logistic regression, correlation, smoothing, and improved the treatment
of outliers. Introduced terms "confidence" and "likelihood" (before Neyman and Fisher).
While largely a frequentist, Peirce's possible world semantics introduced the "propensity"
theory of probability. See the historical books of Stephen Stigler
• Francis Ysidro Edgeworth (Ireland and England, 1845–1926): Revived exponential
families (Laplace transforms) in statistics. Extended Laplace's (asymptotic) theory of
maximum-likelihood estimation. Introduced basic results on information, which were
extended and popularized by R. A. Fisher




• Karl Pearson (English, 1857–1936): Numerous innovations, including the development of
the Pearson chi-squared test and the Pearson correlation. Founded the Biometrical Society
and Biometrika, the first journal of mathematical statistics and biometry
• Charles Spearman (English, 1863–1945): Extended the Pearson correlation coefficient to
the Spearman's rank correlation coefficient
• William Sealy Gosset (known as "Student") (English, 1876–1937): Discovered the Student
t distribution and invented the Student's t-test
• Ronald A. Fisher (English, 1890–1962): Wrote the textbooks and articles that defined the
academic discipline of statistics, inspiring the creation of statistics departments at
universities throughout the world. Systematized previous results with informative
terminology, substantially improving previous results with mathematical analysis (and
claims). Developed the analysis of variance, clarified the method of maximum likelihood
(without the uniform priors appearing in some previous versions), invented the concept of
sufficient statistics, developed Edgeworth's use of exponential families and information,
introducing observed Fisher information, and many theoretical concepts and practical
methods, particularly for the design of experiments

Uses and Importance of Statistics to Computer Students

Statistics is primarily used either to make predictions based on the data available or to draw
conclusions about a population of interest when only sample data are available. In both cases
statistics tries to make sense of the uncertainty in the available data. When making predictions,
statisticians determine whether the differences in the data points are due to chance or whether
there is a systematic relationship. The stronger the systematic relationship observed, the better
the prediction a statistician can make; the more random error observed, the more uncertain the
prediction.

Statisticians can provide a measure of the uncertainty of a prediction. When making inferences
about a population, the statistician is trying to estimate how well a summary statistic computed
from a sample estimates the corresponding population parameter.
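The idea above can be sketched in a few lines of Python. The sample below is hypothetical (say, ten download-speed measurements in Mbps); the sketch estimates the population mean and attaches an approximate interval expressing the uncertainty of that estimate:

```python
import statistics
from statistics import NormalDist

# Hypothetical sample of 10 measurements (e.g., download speeds in Mbps)
sample = [42.1, 39.8, 44.0, 41.5, 40.2, 43.3, 38.9, 42.7, 41.0, 40.6]

n = len(sample)
mean = statistics.mean(sample)     # point estimate of the population mean
sd = statistics.stdev(sample)      # sample standard deviation (n - 1 denominator)
se = sd / n ** 0.5                 # standard error of the mean

# Approximate 95% interval using the normal quantile; a t quantile
# would be more accurate for a sample this small
z = NormalDist().inv_cdf(0.975)
lower, upper = mean - z * se, mean + z * se
print(f"mean = {mean:.2f}, 95% CI ≈ ({lower:.2f}, {upper:.2f})")
```

A larger sample shrinks the standard error, so the interval narrows: the uncertainty of the estimate is quantified, exactly as the paragraph describes.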

For computer students, knowing the basic principles and methods of statistics can help in
research work, such as comparing internet connection speeds across different countries, or
estimating the probability that a given level of connection speed recurs in a week, month, or
year. It can also help in determining the best operating system to use. Whenever there is a need
to compare data and choose the best option, statistics can provide the answer.
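A comparison of the kind described can be sketched with the standard library alone. The boot-time figures for the two operating systems are hypothetical, and the z approximation below is a simplification (a Welch t-test would be the standard choice, but that requires a library such as SciPy):

```python
from statistics import mean, stdev, NormalDist

# Hypothetical boot times (seconds) for two operating systems
os_a = [31.2, 29.8, 33.1, 30.5, 32.0, 28.9, 31.7, 30.1]
os_b = [34.5, 33.0, 35.8, 32.9, 34.1, 36.2, 33.7, 34.8]

def summarize(xs):
    """Return the sample mean, standard deviation, and standard error."""
    m, s = mean(xs), stdev(xs)
    return m, s, s / len(xs) ** 0.5

ma, sa, sea = summarize(os_a)
mb, sb, seb = summarize(os_b)

# Difference of means with an approximate two-sided z test
diff = mb - ma
se_diff = (sea ** 2 + seb ** 2) ** 0.5
z = diff / se_diff
p_two_sided = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"mean A = {ma:.2f}s, mean B = {mb:.2f}s, z = {z:.2f}, p ≈ {p_two_sided:.4f}")
```

A small p-value suggests the observed difference is systematic rather than due to chance, which is precisely the distinction drawn earlier between systematic relationships and random error.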

Some Rights Reserved http://projectdennio.blogspot.com/
