Cross Validated is a question and answer site for people interested in statistics, machine learning, data analysis, data mining, and data visualization.
What is randomness?
In probability and statistics, the concepts of "random" and "randomness" are
frequently used. Often the concept of a random variable is used to model events
that occur due to chance.
My question regards the term "random". What is random? Does randomness really
exist?
I am curious what people who have a lot of experience working with random
events think and believe about randomness.
Tags: interpretation, terminology
asked Apr 11 '12 at 11:47
community wiki
Andrew
In approaching the rather less well-posed question 'does randomness really exist?'
it's helpful to ask yourself whether vectors 'really' exist. And when you have a view
about that, asking yourself a) whether it's surprising or not that polynomials are
vectors, b) whether and how we could be wrong about that, and finally c) whether,
e.g. forces in physics are the things that vectors 'are' in the sense of the question.
Probably none of these questions will help much in understanding what's going on in
the forum, but they will bring out the relevant issues. You might start here and then
follow up the other Stanford Encyclopaedia entries on philosophy of probability and
statistics.
There is a lot of discussion in that literature, thankfully not much of it found around here, about the
existence and relevance of 'actual' physical randomness, usually of the quantum
variety, some of which is (usefully) gestured toward by @dmckee in the comments
above. There's also the idea that randomness is some sort of uncertainty. Within
the minimal framework of Cox it can be reasonable to think of (suitably tidied up)
uncertainties as being isomorphic with probabilities, so such uncertainties are, by
virtue of that connection, treatable as if they are random. Clearly the theory of
repeated sampling also makes use of probability theory, by virtue of which its
quantities are random. One or other of these frameworks will cover all the relevant
aspects of randomness that I've ever seen in these forums.
There are legitimate disagreements about what should and should not be modeled
as random, which you can find under the banners Bayesian and Frequentist, but
these positions only suggest but do not fully determine the meaning of the
randomness involved, just its scope.
community wiki
conjugateprior
+1 for introducing many thoughtful concepts into the discussion. I would like to
suggest it may help to maintain a sharper distinction between randomness and
uncertainty: one leads to the other, but not vice versa, yet many people (obviously
not you!) exhibit some confusion about the difference. We know that not all
uncertainty comes from randomness, nor is all that is arbitrary or variable
necessarily "random" in the technical sense employed in statistical practice.
whuber Apr 11 '12 at 15:22
I guess you're identifying random with sampling variability, which is obviously fine. I
was trying to separate three things: the probability theory, the things that vary in
repeated sampling, and uncertainty about stuff. (A strong and controversial
claim about the connections between them that might interest you is
Lewis's 'Principal Principle' from 'A Subjectivist's Guide to Objective Chance'.)
conjugateprior Apr 11 '12 at 23:45
Please don't read that much into my comment: I had no intention of identifying
randomness with sampling variability! I just wanted to call (positive) attention to
some of the points you make. To agree or disagree with them would require a
lengthy detailed analysis. (To get a sense of the kind of analysis involved, the article
at plato.stanford.edu/entries/chance-randomness/#4 is of interest. But please don't
assume that I hold with all the assertions in that article just because I am drawing
attention to it!) whuber Apr 12 '12 at 13:32
@whuber Oh OK. And thanks for plug :-) conjugateprior Apr 12 '12 at 15:16
If we assume that we are living in a deterministic universe (everything that happens is
predetermined and, given the same exact situation, the same exact things will
happen), then there is no "random" at all.
In this case, "randomness" is merely used to represent what might happen given
our limited knowledge. If we had perfect knowledge of a system, nothing would be
random.
community wiki
Andrew
Well put @dmckee. I'll point out that, while most people believe Quantum
Mechanics states without doubt that the world is non-deterministic, this is not
actually true - that is just one interpretation of quantum mechanics (which happens
to be the most popular), but there are other, deterministic interpretations out there.
BlueRaja - Danny Pflughoeft Apr 11 '12 at 16:21
@BlueRaja-DannyPflughoeft: Pay attention to what I wrote: either there is non-determinism or there is non-local information, and you cannot have complete
knowledge. There is no point in bringing the interpretation of quantum mechanics
into the discussion because the situation is independent of which interpretation you
choose. dmckee Apr 11 '12 at 16:26
My definition of random would be unpredictable, i.e. you can never know with 100%
certainty the outcome of an event, although you might be able to put a bound on
the range of possibilities. A simple example would be rolling a fair die: you can
never know exactly which number will come up on each roll, but you do know it
will be one of the numbers 1 through 6.
community wiki
babelproofreader
This would imply that randomness is "subjective", since one's ability to predict the
future varies with knowledge and tools. This would be closer to the Bayesian
viewpoint. Memming Apr 11 '12 at 16:14
If one isn't ignorant of the machinery, if in fact one has 100% knowledge of how the
machinery works but this still isn't sufficient to accurately predict outcomes, then
this gap or inability to forecast is unpredictability, or randomness. Just as Popper said
that nothing is actually true but only accepted as true until falsified,
babelproofreader says randomness is true, absolute unpredictability: no model,
even a 100% infallibly accurate one, is good enough to predict
randomness. This gap between reality and perfect knowledge of the "system"
behind it is randomness. babelproofreader Apr 11 '12 at 21:48
I tend to prefer a probabilistic interpretation of randomness. An event is random if
gaining any additional information does not help you predict its outcome. That is,
the event is unconditionally random. Notationally:
p(A|B) = p(A) for all B
To put it in concrete terms: if you believe that a die roll (A) is truly random, then
knowing the exact physical state of the die as it is thrown (B) confers no additional
predictive power on the outcome of the toss.
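This definition is easy to check empirically. Below is a minimal Python sketch (the particular events, sample size, and seed are arbitrary illustrative choices, not from the thread): for two independent events, the conditional relative frequency of A given B comes out the same as the unconditional one.

```python
import random

random.seed(0)

# Two independent events: A = "die shows 6", B = "coin lands heads".
# If A is independent of B, the empirical P(A|B) should match P(A).
N = 100_000
a_count = b_count = ab_count = 0
for _ in range(N):
    a = random.randint(1, 6) == 6        # event A
    b = random.random() < 0.5            # event B
    a_count += a
    b_count += b
    ab_count += a and b

p_a = a_count / N
p_a_given_b = ab_count / b_count
print(f"P(A)   ~ {p_a:.3f}")
print(f"P(A|B) ~ {p_a_given_b:.3f}")  # close to P(A): B carries no information about A
```
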
community wiki
2 revs
Lucas
This is an intriguing approach, but doesn't it get things reversed? Once we are
certain about an event, no additional amount of information helps us predict it any
better. When an event is random--say, whether Y>0 for a bivariate normal variable
(X,Y)--then additional information, such as the value of X in this case, usually does
"confer additional predictive power" by allowing us to replace Pr(Y>0) by Pr(Y>0|X).
whuber Apr 11 '12 at 16:28
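whuber's bivariate-normal example can be made concrete. The sketch below (the correlation value and the grid of x values are illustrative assumptions) uses the standard result that for a standard bivariate normal with correlation rho, Y given X=x is N(rho*x, 1-rho^2), so conditioning on X genuinely changes Pr(Y>0):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rho = 0.8   # illustrative correlation between X and Y

# Unconditionally, Pr(Y > 0) = 1/2 by symmetry.
# Conditionally, Y | X=x ~ N(rho*x, 1 - rho^2), so
# Pr(Y > 0 | X = x) = Phi(rho*x / sqrt(1 - rho^2)).
for x in (-1.0, 0.0, 1.0):
    p = phi(rho * x / math.sqrt(1 - rho**2))
    print(f"Pr(Y>0 | X={x:+.1f}) = {p:.3f}")
```
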
No, the notation is a shorthand where p(Y) should be expanded as p(Y=y). After the
event has occurred, you know it with certainty, i.e. p(Y|Y=y,B) is 1 for Y=y and 0
otherwise. And, yes, knowing B (or X) is usually predictive, but then A wouldn't be
truly random. Lucas Apr 11 '12 at 16:37
Therefore, randomness is only in the future. Once the event has occurred, we know
its value and it is no longer random... even if it were random before. Andrew Apr
11 '12 at 17:26
@Andrew: This is probably pedagogical, but it's the process of generating the event
that is random, not the event itself. The event is just a thing. Lucas Apr 11 '12 at
17:38
Randomness
From Wikipedia, the free encyclopedia
History
Main article: History of randomness
In ancient history, the concepts of chance and randomness were intertwined with that of fate. Many ancient
peoples threw dice to determine fate, and this later evolved into games of chance. Most ancient cultures used
various methods of divination to attempt to circumvent randomness and fate.[3][4]
The Chinese were perhaps the earliest people to formalize odds and chance 3,000 years ago. The Greek
philosophers discussed randomness at length, but only in non-quantitative forms. It was only in the 16th
century that Italian mathematicians began to formalize the odds associated with various games of chance. The
invention of the calculus had a positive impact on the formal study of randomness. In the 1888 edition of his
book The Logic of Chance, John Venn wrote a chapter on "The conception of randomness" that included his view
of the randomness of the digits of the number pi, by using them to construct a random walk in two dimensions.[5]
The early part of the 20th century saw a rapid growth in the formal analysis of randomness, as various
approaches to the mathematical foundations of probability were introduced. In the mid- to late-20th century,
ideas of algorithmic information theory introduced new dimensions to the field via the concept of algorithmic
randomness.
Although randomness had often been viewed as an obstacle and a nuisance for many centuries, in the 20th
century computer scientists began to realize that the deliberate introduction of randomness into computations
can be an effective tool for designing better algorithms. In some cases such randomized algorithms outperform
the best deterministic methods.
In science
Many scientific fields are concerned with randomness:
Algorithmic probability
Chaos theory
Cryptography
Game theory
Information theory
Pattern recognition
Probability theory
Quantum mechanics
Statistical mechanics
Statistics
In biology
The modern evolutionary synthesis ascribes the observed diversity of life to natural selection, in which some
random genetic mutations are retained in the gene pool due to the systematically improved chance for survival
and reproduction that those mutated genes confer on individuals who possess them.
The characteristics of an organism arise to some extent deterministically (e.g., under the influence of genes
and the environment) and to some extent randomly. For example, the density of freckles that appear on a
person's skin is controlled by genes and exposure to light; whereas the exact location ofindividual freckles
seems random.[8]
Randomness is important if an animal is to behave in a way that is unpredictable to others. For instance,
insects in flight tend to move about with random changes in direction, making it difficult for pursuing predators
to predict their trajectories.
In mathematics
The mathematical theory of probability arose from attempts to formulate mathematical descriptions of chance
events, originally in the context of gambling, but later in connection with physics. Statistics is used to infer the
underlying probability distribution of a collection of empirical observations. For the purposes of simulation, it is
necessary to have a large supply of random numbers or means to generate them on demand.
Algorithmic information theory studies, among other topics, what constitutes a random sequence. The central
idea is that a string of bits is random if and only if it is shorter than any computer program that can produce that
string (Kolmogorov randomness); this means that random strings are those that cannot be compressed.
Pioneers of this field include Andrey Kolmogorov and his student Per Martin-Löf, Ray Solomonoff, and Gregory
Chaitin.
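A rough practical proxy for this incompressibility idea: a general-purpose compressor shrinks a highly regular string dramatically but leaves (pseudo)random bytes essentially uncompressed. A sketch using Python's zlib (the inputs and lengths are arbitrary choices):

```python
import random
import zlib

def compressed_ratio(data: bytes) -> float:
    """Length of zlib-compressed data relative to the original."""
    return len(zlib.compress(data, 9)) / len(data)

random.seed(42)
# A regular string compresses very well; random bytes barely compress at all.
regular = b"0123456789" * 10_000
noise = bytes(random.randrange(256) for _ in range(100_000))

print(f"regular: {compressed_ratio(regular):.4f}")  # far below 1
print(f"random:  {compressed_ratio(noise):.4f}")    # close to 1
```
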
Randomness occurs in numbers such as log(2) and pi. The decimal digits of pi constitute an infinite sequence
and "never repeat in a cyclical fashion." Numbers like pi are also considered likely to be normal, which means
their digits are random in a certain statistical sense.
Pi certainly seems to behave this way. In the first six billion decimal places of pi, each of the digits from 0
through 9 shows up about six hundred million times. Yet such results, conceivably accidental, do not prove
normality even in base 10, much less normality in other number bases. [9]
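This digit-frequency observation is easy to reproduce on a smaller scale. The sketch below generates digits of pi with Gibbons' unbounded spigot algorithm (a published method, though not one the article mentions) and tallies the first 1,000 digits; the counts come out roughly uniform:

```python
from collections import Counter
from itertools import islice

def pi_digits():
    """Yield decimal digits of pi (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            # All right-hand sides use the pre-update values.
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = list(islice(pi_digits(), 1000))
counts = Counter(digits)
for d in range(10):
    print(d, counts[d])   # each digit appears roughly 100 times
```
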
In statistics
In statistics, randomness is commonly used to create simple random samples. This allows surveys of randomly
selected groups of people to provide data that are representative of the population. Common methods of doing this include drawing names out of a
hat or using a random digit chart. A random digit chart is simply a large table of random digits.
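In code, a simple random sample is usually drawn directly rather than via a digit chart. A sketch using Python's standard library (the population and sample size are made-up illustrations); `random.sample` draws without replacement, so every group of k individuals is equally likely to be chosen:

```python
import random

random.seed(1)

# A modern stand-in for drawing names out of a hat.
population = [f"person_{i:03d}" for i in range(500)]
sample = random.sample(population, k=20)   # simple random sample, no replacement
print(sample)
```
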
In information science
In information science, irrelevant or meaningless data is considered noise. Noise consists of a large number of
transient disturbances with a statistically randomized time distribution.
In communication theory, randomness in a signal is called "noise" and is opposed to that component of its
variation that is causally attributable to the source, the signal.
In terms of the development of random networks, for communication randomness rests on the two simple
assumptions of Paul Erdős and Alfréd Rényi, who said that there were a fixed number of nodes and this number
remained fixed for the life of the network, and that all nodes were equal and linked randomly to each other.[10]
In finance
The random walk hypothesis considers that asset prices in an organized market evolve at random, in the sense
that the expected value of their change is zero but the actual value may turn out to be positive or negative.
More generally, asset prices are influenced by a variety of unpredictable events in the general economic
environment.
there is only one universal outcome. The rival Bayesian interpretation of probability uses probabilities to
represent a lack of complete knowledge of outcomes.
Chaotic systems are unpredictable in practice due to their extreme sensitivity to initial conditions. In some
disciplines of computability theory, the notion of randomness is identified with computational unpredictability.
Whether or not chaotic systems are computable is a subject of research.
Individual events that are random may still be precisely described en masse, usually in terms of probability or
expected value. For instance, quantum mechanics allows a very precise calculation of the half-lives of atoms
even though the process of atomic decay is random. More simply, although a single toss of a fair coin cannot
be predicted, its general behavior can be described by saying that if a large number of tosses are made,
roughly half of them will show up heads. Ohm's law and the kinetic theory of gases are nonrandom macroscopic phenomena that are assumed random at the microscopic level.
In politics
Random selection can be an official method to resolve tied elections in some jurisdictions.[11] Its use in politics is
very old, as office holders in Ancient Athens were chosen by lot, there being no voting.
Applications
Main article: Applications of randomness
In most of its mathematical, political, social and religious uses, randomness is used for its innate "fairness" and
lack of bias.
Politics: Athenian democracy was based on the concept of isonomia (equality of political rights) and used
complex allotment machines to ensure that the positions on the ruling committees that ran Athens were fairly
allocated. Allotment is now restricted to situations where "fairness" is approximated by randomization, such as
selecting jurors in Anglo-Saxon legal systems and military draft lotteries.
Games: Random numbers were first investigated in the context of gambling, and many randomizing devices,
such as dice, shuffling playing cards, and roulette wheels, were first developed for use in gambling. The ability
to produce random numbers fairly is vital to electronic gambling, and, as such, the methods used to create
them are usually regulated by government Gaming Control Boards. Random drawings are also used to
determine lottery winners. Throughout history, randomness has been used for games of chance and to select
out individuals for an unwanted task in a fair way (see drawing straws).
Sports: Some sports, including American football, use coin tosses to randomly select starting conditions for
games or seed tied teams for postseason play. The National Basketball Association uses a weighted lottery to
order teams in its draft.
Mathematics: Random numbers are also used where their use is mathematically important, such as sampling
for opinion polls and for statistical sampling in quality control systems. Computational solutions for some types
of problems use random numbers extensively, such as in the Monte Carlo method and in genetic algorithms.
Medicine: Random allocation of a clinical intervention is used to reduce bias in controlled trials
(e.g., randomized controlled trials).
Religion: Although not intended to be random, various forms of divination such as cleromancy see what
appears to be a random event as a means for a divine being to communicate their will. (See also Free
will and Determinism).
Generation
Main article: Random number generation
The ball in a roulette wheel can be used as a source of apparent randomness, because its behavior is very sensitive to the initial
conditions.
It is generally accepted that there exist three mechanisms responsible for (apparently) random behavior in
systems:
1. Randomness coming from the environment (for example, Brownian motion, but also hardware random number generators).
2. Randomness coming from the initial conditions, studied by chaos theory and observed in systems whose behavior is very sensitive to small variations in initial conditions (such as pachinko machines and dice).
3. Randomness intrinsically generated by the system, also called pseudorandomness, the kind used in pseudo-random number generators.
The many applications of randomness have led to many different methods for generating random data. These
methods may vary as to how unpredictable or statistically random they are, and how quickly they can generate
random numbers.
Before the advent of computational random number generators, generating large amounts of sufficiently
random numbers (important in statistics) required a lot of work. Results would sometimes be collected and
distributed as random number tables.
A number is "due"
See also: Coupon collector's problem
This argument is, "In a random selection of numbers, since all numbers eventually appear, those that have not
come up yet are 'due', and thus more likely to come up soon." This logic is only correct if applied to a system
where numbers that come up are removed from the system, such as when playing cards are drawn and not
returned to the deck. In this case, once a jack is removed from the deck, the next draw is less likely to be a jack
and more likely to be some other card. However, if the jack is returned to the deck, and the deck is thoroughly
reshuffled, a jack is as likely to be drawn as any other card. The same applies in any other process where
objects are selected independently, and none are removed after each event, such as the roll of a die, a coin
toss, or most lottery number selection schemes. Truly random processes such as these do not have memory,
making it impossible for past outcomes to affect future outcomes.
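The card example can be simulated to see the memory/memoryless distinction directly (a sketch; the sample size and seed are arbitrary). Without replacement the next-jack probability is 3/51; with the jack returned and the deck reshuffled it is back to 4/52:

```python
import random

rng = random.Random(3)
N = 200_000

def next_is_jack(replace: bool) -> bool:
    """Remove one jack from a fresh 52-card deck, then draw one card.
    If replace is True, the jack is returned (and the deck reshuffled) first."""
    deck = ["jack"] * 4 + ["other"] * 48
    deck.remove("jack")          # a jack has just been drawn
    if replace:
        deck.append("jack")      # ...and put back before the next draw
    return rng.choice(deck) == "jack"

without = sum(next_is_jack(False) for _ in range(N)) / N
with_replacement = sum(next_is_jack(True) for _ in range(N)) / N
print(f"without replacement: {without:.4f}  (exact 3/51 = 0.0588...)")
print(f"with replacement:    {with_replacement:.4f}  (exact 4/52 = 0.0769...)")
```
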
Say we are told that a woman has two children. If we ask whether either of them is a girl, and are told yes, what
is the probability that the other child is also a girl? Considering the other child independently, one might expect
the probability that it is a girl to be 1/2 (50%). But by building a probability space (illustrating all
possible outcomes), we see that the probability is actually only 1/3 (33%). This is because the possibility space
illustrates 4 ways of having these two children: boy-boy, girl-boy, boy-girl, and girl-girl. But we were given more
information. Once we are told that one of the children is a female, we use this new information to eliminate the
boy-boy scenario. Thus the probability space reveals that there are still 3 ways to have two children where one
is a female: boy-girl, girl-boy, girl-girl. Only 1/3 of these scenarios would have the other child also be a girl.
[13]
Using a probability space, we are less likely to miss one of the possible scenarios, or to neglect the
importance of new information. For further information, see Boy or girl paradox.
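The probability-space argument can be written as a direct enumeration (a sketch; the labels are illustrative):

```python
from itertools import product

# The four equally likely two-child families, as (older, younger) pairs.
families = list(product(["boy", "girl"], repeat=2))

# Condition on "at least one is a girl", then count how often both are girls.
at_least_one_girl = [f for f in families if "girl" in f]
both_girls = [f for f in at_least_one_girl if f == ("girl", "girl")]

print(at_least_one_girl)                          # the 3 surviving outcomes
print(len(both_girls) / len(at_least_one_girl))   # 1/3
```
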
This technique provides insights in other situations such as the Monty Hall problem, a game show scenario in
which a car is hidden behind one of three doors, and two goats are hidden as booby prizes behind the others.
Once the contestant has chosen a door, the host opens one of the remaining doors to reveal a goat, eliminating
that door as an option. With only two doors left (one with the car, the other with another goat), the player must
decide to either keep their decision, or switch and select the other door. Intuitively, one might think the player is
choosing between two doors with equal probability, and that the opportunity to choose another door makes no
difference. But probability spaces reveal that the contestant has received new information, and can increase
their chances of winning by changing to the other door.[13]
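A quick simulation makes the advantage of switching visible (a sketch; seed and trial count are arbitrary choices):

```python
import random

def monty_hall(switch: bool, rng) -> bool:
    """One round of the Monty Hall game; True if the player wins the car."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(11)
N = 100_000
stay = sum(monty_hall(False, rng) for _ in range(N)) / N
switch = sum(monty_hall(True, rng) for _ in range(N)) / N
print(f"stay:   {stay:.3f}   (about 1/3)")
print(f"switch: {switch:.3f}   (about 2/3)")
```
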
See also
Statistics portal
Algorithmic probability
Aleatory
Chaitin's constant
Chance (disambiguation)
Chaos theory
Cryptography
Frequency probability
Game theory
Information theory
Nonlinear system
Pattern recognition
Predictability
Probability interpretations
Probability theory
Pseudorandomness
Quantum mechanics
Statistical mechanics
Statistics
Ulam spiral
References
1. The Oxford English Dictionary defines "random" as "Having no definite aim or purpose; not sent or guided in a particular direction; made, done, occurring, etc., without method or conscious choice; haphazard."
2. Third Workshop on Monte Carlo Methods, Jun Liu, Professor of Statistics, Harvard University.
3. Handbook to Life in Ancient Rome by Lesley Adkins, 1998, ISBN 0-19-512332-8, page 279.
4. Religions of the Ancient World by Sarah Iles Johnston, 2004, ISBN 0-674-01517-7, page 370.
5. Annotated Readings in the History of Statistics by Herbert Aron David, 2001, ISBN 0-387-98844-0, page 115. Note that the 1866 edition of Venn's book (on Google Books) does not include this chapter.
6.
7.
8.
9. "Are the digits of pi random? Researcher may hold the key". Lbl.gov. 2001-07-23. Retrieved 2012-07-27.
10. Albert-László Barabási (2003), Linked, "Rich Gets Richer", p. 81.
11. Municipal Elections Act (Ontario, Canada) 1996, c. 32, Sched., s. 62 (3): "If the recount indicates that two or more candidates who cannot both or all be declared elected to an office have received the same number of votes, the clerk shall choose the successful candidate or candidates by lot."
12. Terry Ritter, Randomness tests: a literature survey. ciphersbyritter.com
13. Johnson, George (8 June 2008). "Playing the Odds". The New York Times.
Further reading
Random Measures, 4th ed. by Olav Kallenberg. Academic Press, New York,
London; Akademie-Verlag, Berlin, 1986. MR0854102.
Random by Kenneth Chan includes a "Random Scale" for grading the level of
randomness.
External links
Wikiversity has learning
materials about Random
Look up randomness in
Wiktionary, the free
dictionary.
Wikiquote has quotations
related to: Randomness
Wikimedia Commons has
media related
toRandomness.
Text is available under the Creative Commons Attribution-ShareAlike License; additional terms may
apply. By using this site, you agree to the Terms of Use and Privacy Policy. Wikipedia is a registered
trademark of the Wikimedia Foundation, Inc., a non-profit organization.
Statistical randomness
From Wikipedia, the free encyclopedia
See also: algorithmic randomness
A numeric sequence is said to be statistically random when it contains no
recognizable patterns or regularities; sequences such as the results of an ideal die
roll, or the digits of pi, exhibit statistical randomness.[1]
Statistical randomness does not necessarily imply "true" randomness, i.e., objective
unpredictability. Pseudorandomness is sufficient for many uses, such as statistics,
hence the name statistical randomness.
Global randomness and local randomness are different. Most philosophical
conceptions of randomness are global, because they are based on the idea that "in
the long run" a sequence looks truly random, even if certain sub-sequences would
not look random. In a "truly" random sequence of numbers of sufficient length, for
example, it is probable there would be long sequences of nothing but repeating
numbers, though on the whole the sequence might be random. Local randomness
refers to the idea that there can be minimum sequence lengths in which random
distributions are approximated. Long stretches of the same numbers, even those
generated by "truly" random processes, would diminish the "local randomness" of a
sample (it might only be locally random for sequences of 10,000 numbers; taking
sequences of less than 1,000 might not appear random at all, for example).
A sequence exhibiting a pattern is not thereby proved not statistically random.
According to principles of Ramsey theory, sufficiently large objects must necessarily
contain a given substructure ("complete disorder is impossible"). Chaos theorists
disagree with Ramsey Theory.
Legislation concerning gambling imposes certain standards of statistical
randomness on slot machines.
Tests
The first tests for random numbers were published by M.G. Kendall and Bernard
Babington Smith in the Journal of the Royal Statistical Society in 1938.[2] They were
built on statistical tools such as Pearson's chi-squared test that were developed to
distinguish whether experimental phenomena matched their theoretical
probabilities. Pearson developed his test originally by showing that a number of dice
experiments by W.F.R. Weldon did not display "random" behavior.
Kendall and Smith's original four tests were hypothesis tests, which took as their
null hypothesis the idea that each number in a given random sequence had an
equal chance of occurring, and that various other patterns in the data should be
also distributed equiprobably.
The frequency test was very basic: checking to make sure that there were roughly
the same number of 0s, 1s, 2s, 3s, etc.
The serial test did the same thing but for sequences of two digits at a time (00, 01,
02, etc.), comparing their observed frequencies with their hypothetical predictions
were they equally distributed.
The poker test tested for certain sequences of five numbers at a time (aaaaa,
aaaab, aaabb, etc.) based on hands in the game poker.
The gap test looked at the distances between zeroes (00 would be a distance of 0,
030 would be a distance of 1, 02250 would be a distance of 3, etc.).
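In modern terms the frequency test is a chi-squared goodness-of-fit test against a uniform distribution over the ten digits. A minimal sketch (16.919 is the standard 5% critical value for 9 degrees of freedom; the test streams below are synthetic, purely for illustration):

```python
import random

def frequency_test(digits, chi2_crit=16.919):
    """Kendall-and-Smith-style frequency test for decimal digits.
    Computes Pearson's chi-squared statistic against a uniform expectation;
    16.919 is the 5% critical value for 9 degrees of freedom."""
    n = len(digits)
    expected = n / 10
    counts = [0] * 10
    for d in digits:
        counts[d] += 1
    chi2 = sum((c - expected) ** 2 / expected for c in counts)
    return chi2, chi2 < chi2_crit   # (statistic, passed?)

rng = random.Random(5)
uniform = [rng.randrange(10) for _ in range(10_000)]
# A biased stream: roughly 10% of positions are forced to the digit 7.
biased = [rng.randrange(10) if rng.random() < 0.9 else 7 for _ in range(10_000)]

print(frequency_test(uniform))   # modest chi-squared statistic
print(frequency_test(biased))    # very large statistic: fails the test
```
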
If a given sequence was able to pass all of these tests within a given degree of
significance (generally 5%), then it was judged to be, in their words, "locally
random". Kendall and Smith differentiated "local randomness" from "true
randomness" in that many sequences generated with truly random methods might
not display "local randomness" to a given degree: very large sequences might
contain many rows of a single digit. This might be "random" on the scale of the
entire sequence, but in a smaller block it would not be "random" (it would not pass
their tests), and would be useless for a number of statistical applications.
As random number sets became more and more common, more tests of increasing
sophistication were used. Some modern tests plot random digits as points on a
three-dimensional plane, which can then be rotated to look for hidden patterns. In
1995, the statistician George Marsaglia created a set of tests known as the diehard
tests, which he distributed with a CD-ROM of 5 billion pseudorandom numbers.
Pseudorandom number generators require tests as exclusive verifications for their
"randomness," as they are decidedly not produced by "truly random" processes, but
rather by deterministic algorithms. Over the history of random number generation,
many sources of numbers thought to appear "random" under testing have later
been discovered to be very non-random when subjected to certain types of tests.
The notion of quasi-random numbers was developed to circumvent some of these
problems, though pseudorandom number generators are still extensively used in
many applications (even ones known to be extremely "non-random"), as they are
"good enough" for most applications.
Other tests:
The monobit test treats each output bit of the random number generator as a coin
flip, and determines whether the observed numbers of heads and tails are close to the
expected 50% frequency. The number of heads in a coin flip trial follows a binomial
distribution.
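A minimal monobit test can be written with the normal approximation to the binomial (a sketch; the 1.96 cutoff is the usual two-sided 5% critical value, and the bit streams below are synthetic examples):

```python
import math
import random

def monobit_test(bits, z_crit=1.96):
    """Is the number of 1-bits consistent with a fair coin?
    Standardizes the count of ones using the normal approximation
    to Binomial(n, 1/2); 1.96 is the two-sided 5% critical value."""
    n = len(bits)
    ones = sum(bits)
    z = (ones - n / 2) / math.sqrt(n / 4)
    return z, abs(z) < z_crit   # (z statistic, passed?)

rng = random.Random(9)
fair = [rng.randrange(2) for _ in range(100_000)]
skewed = [1 if rng.random() < 0.51 else 0 for _ in range(100_000)]  # slight bias

print(monobit_test(fair))     # small |z|
print(monobit_test(skewed))   # large |z|: fails despite only a 1% bias
```
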
The Wald–Wolfowitz runs test counts the number of bit transitions between 0 bits
and 1 bits, comparing the observed frequencies with the expected frequency for a
random bit sequence.
Information entropy
Autocorrelation test
Kolmogorov–Smirnov test
This page was last modified on 7 July 2015, at 06:56.