
Introduction

Probability theory is a branch of mathematics concerned with determining the likelihood that a given event will occur. This likelihood is determined by dividing the number of selected events by the number of total events possible. For example, consider a single die (one of a pair of dice) with six faces. Each face contains a different number of dots: 1, 2, 3, 4, 5, or 6. If you roll the die in a completely random way, the probability of getting any one of the six faces (1, 2, 3, 4, 5, or 6) is one out of six.

Probability theory originally grew out of problems encountered by seventeenth-century gamblers. It has since developed into one of the most respected and useful branches of mathematics, with applications in many different industries. Perhaps what makes probability theory most valuable is that it can be used to determine the expected outcome in any situation, from the chances that a plane will crash to the probability that a person will win the lottery.
Part 1
Part 1 (a):

History Of Probability Theory

Probability theory was originally inspired by gambling problems. The earliest work on the subject was performed by Italian mathematician and physicist Girolamo Cardano (1501-1576). In his manual Liber de Ludo Aleae, Cardano discusses many of the basic concepts of probability, complete with a systematic analysis of gambling problems. Unfortunately, Cardano's work had little effect on the development of probability because his manual did not appear in print until 1663, and even then received little attention.

In 1654, another gambler, the Chevalier de Méré, invented a system for gambling that he was convinced would make money. He decided to bet even money that he could roll at least one twelve in 24 rolls of two dice. However, when the Chevalier began losing money, he asked his mathematician friend Blaise Pascal (1623-1662) to analyze his gambling system. Pascal discovered that the Chevalier's system would lose about 51 percent of the time.

Pascal became so interested in probability that he began studying more problems in this field. He discussed them with another famous mathematician, Pierre de Fermat (1601-1665), and together they laid the foundation of probability theory.

[Figure: The probability of rolling snake eyes (two ones) with a pair of dice is 1 in 36.]
Methods Of Studying Probability

Probability theory is concerned with determining the relationship between the number of times some specific given event occurs and the number of times any event occurs. For example, consider the flipping of a coin. One might ask how many times a head will appear when a coin is flipped 100 times.

Determining probabilities can be done in two ways: theoretically and empirically. The example of a coin toss helps illustrate the difference between these two approaches. Using a theoretical approach, we reason that in every flip there are two possibilities, a head or a tail. By assuming each event is equally likely, the probability that the coin will end up heads is 1/2, or 0.5.

The empirical approach does not use assumptions of equal likelihood. Instead, an actual coin-flipping experiment is performed, and the number of heads is counted. The probability is then equal to the number of heads actually found divided by the total number of flips.
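
A minimal sketch of both approaches (in Python, using the standard random module; the trial count of 100 follows the example above, and the empirical output varies from run to run):

import random

TRIALS = 100
theoretical_p = 1 / 2  # two equally likely faces, one of them heads

# Empirical approach: actually "flip" the coin and count heads.
heads = sum(1 for _ in range(TRIALS) if random.choice("HT") == "H")
empirical_p = heads / TRIALS

print("theoretical:", theoretical_p)
print("empirical:  ", empirical_p)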
Basic Concepts

Probability is always represented as a fraction, for example, the number of times a "1 dot" turns up when a die is rolled (such as 1 out of 6, or 1/6) or the number of times a head will turn up when a penny is flipped (such as 1 out of 2, or 1/2). Thus the probability of any event always lies somewhere between 0 and 1. In this range, a probability of 0 means that there is no likelihood at all of the given event's occurring. A probability of 1 means that the given event is certain to occur.

Probabilities may or may not be dependent on each other. For example, we might ask what is the probability of picking a red card OR a king from a deck of cards. These events are independent because even if you pick a red card, you could still pick a king.

As an example of a dependent probability (also called a conditional probability), consider an experiment in which one is allowed to pick any ball at random out of an urn that contains six red balls and six black balls. On the first try, a person would have an equal probability of picking either a red or a black ball. The number of each color is the same. But the probability of picking either color is different on the second try, since only five balls of one color remain.
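
The shift in probability on the second draw can be written out exactly. A small Python sketch using exact fractions (the counts of six red and six black balls come from the example above):

from fractions import Fraction

red, black = 6, 6
total = red + black

# First draw: both colors are equally likely.
p_red_first = Fraction(red, total)                       # 6/12 = 1/2

# Second draw, given the first ball drawn was red:
# one red ball is gone, so 5 reds remain among 11 balls.
p_red_second_given_red = Fraction(red - 1, total - 1)    # 5/11
p_black_second_given_red = Fraction(black, total - 1)    # 6/11

print(p_red_first, p_red_second_given_red, p_black_second_given_red)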
Applications Of Probability Theory

Probability theory was originally developed to help gamblers determine the best bet to make in a given situation. Many gamblers still rely on probability theory, either consciously or unconsciously, to make gambling decisions.

Probability theory today has a much broader range of applications than just
in gambling, however. For example, one of the great changes that took place
in physics during the 1920s was the realization that many events in nature
cannot be described with perfect certainty. The best one can do is to say
how likely the occurrence of a particular event might be.

When the nuclear model of the atom was first proposed, for example,
scientists felt confident that electrons traveled in very specific orbits around
the nucleus of the atom. Eventually they found that there was no basis for
this level of certainty. Instead, the best they could do was to specify the
probability that a given electron would appear in various regions of space in
the atom. If you have ever seen a picture of an atom in a science or chemistry
book, you know that the cloudlike appearance of the atom is a way of
showing the probability that electrons occur in various parts of the atom.
Part 1 (b):
Theoretical Probabilities and Empirical Probabilities

Theoretical Probabilities:

Probability theory is the branch of mathematics concerned with the analysis of random phenomena. The central objects of probability theory are random variables, stochastic processes, and events: mathematical abstractions of non-deterministic events or measured quantities that may either be single occurrences or evolve over time in an apparently random fashion. Although an individual coin toss or the roll of a die is a random event, if repeated many times the sequence of random events will exhibit certain statistical patterns, which can be studied and predicted. Two representative mathematical results describing such patterns are the law of large numbers and the central limit theorem.
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of large sets of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.
Empirical Probabilities

Empirical probability, also known as relative frequency or experimental probability, is the ratio of the number of favorable outcomes to the total number of trials, not in a sample space but in an actual sequence of experiments. In a more general sense, empirical probability estimates probabilities from experience and observation. The phrase a posteriori probability has also been used as an alternative to empirical probability or relative frequency. This unusual usage of the phrase is not directly related to Bayesian inference and is not to be confused with its equally occasional use to refer to posterior probability, which is something else. In statistical terms, the empirical probability is an estimate of a probability. If modelling using a binomial distribution is appropriate, it is the maximum likelihood estimate. It is the Bayesian estimate for the same case if certain assumptions are made for the prior distribution of the probability.

An advantage of estimating probabilities using empirical probabilities is that this procedure is relatively free of assumptions. For example, consider estimating the probability among a population of men that they satisfy two conditions: (i) that they are over 6 feet in height; (ii) that they prefer strawberry jam to raspberry jam. A direct estimate could be found by counting the number of men who satisfy both conditions to give the empirical probability of the combined condition. An alternative estimate could be found by multiplying the proportion of men who are over 6 feet in height by the proportion of men who prefer strawberry jam to raspberry jam, but this estimate relies on the assumption that the two conditions are statistically independent.
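
Both estimates can be computed side by side. The Python sketch below uses a small invented sample purely for illustration; none of these numbers come from the text:

# Hypothetical sample: (over_6_feet, prefers_strawberry) for each man.
sample = [(True, True), (True, False), (False, True),
          (False, False), (True, True), (False, True)]

n = len(sample)
p_tall = sum(1 for tall, _ in sample if tall) / n
p_strawberry = sum(1 for _, jam in sample if jam) / n

# Direct empirical estimate of the combined condition.
p_both_direct = sum(1 for tall, jam in sample if tall and jam) / n

# Alternative estimate, valid only if the two conditions are independent.
p_both_independent = p_tall * p_strawberry

print(p_both_direct, p_both_independent)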
A disadvantage in using empirical probabilities arises in estimating probabilities which are either very close to zero or very close to one. In these cases very large sample sizes would be needed in order to estimate such probabilities to a good standard of relative accuracy. Here statistical models can help, depending on the context, and in general one can hope that such models would provide improvements in accuracy compared to empirical probabilities, provided that the assumptions involved actually do hold. For example, consider estimating the probability that the lowest of the daily-maximum temperatures at a site in February in any one year is less than zero degrees Celsius. A record of such temperatures in past years could be used to estimate this probability. A model-based alternative would be to select a family of probability distributions and fit it to the dataset containing past yearly values: the fitted distribution would provide an alternative estimate of the required probability. This alternative method can provide an estimate of the probability even if all values in the record are greater than zero.

Difference between Empirical Probability and Theoretical Probability

Empirical probability is the probability a person calculates from many different trials. For example, someone can flip a coin 100 times and then record how many times it came up heads and how many times it came up tails. The number of recorded heads divided by 100 is the empirical probability that one gets heads.

The theoretical probability is the result that one should get if an infinite number of trials were done. One would expect the probability of heads to be 0.5 and the probability of tails to be 0.5 for a fair coin.
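
A brief Python sketch of this comparison (the trial counts are arbitrary choices; the empirical values should drift toward the theoretical 0.5 as the number of flips grows):

import random

for trials in (100, 10_000, 1_000_000):
    heads = sum(random.choice((0, 1)) for _ in range(trials))
    # Empirical probability of heads after `trials` flips.
    print(trials, heads / trials)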


Part 3

Table 1 shows the sum of all dots on both turned-up faces when two dice are tossed simultaneously.

(a) Complete Table 1 by listing all possible outcomes and their corresponding probabilities.

Sum of the dots on both
turned-up faces (x)     Possible outcomes                         Probability, P(x)
2                       (1,1)                                     1/36
3                       (1,2) (2,1)                               2/36
4                       (1,3) (3,1) (2,2)                         3/36
5                       (1,4) (4,1) (2,3) (3,2)                   4/36
6                       (1,5) (5,1) (2,4) (4,2) (3,3)             5/36
7                       (1,6) (6,1) (2,5) (5,2) (3,4) (4,3)       6/36
8                       (2,6) (6,2) (3,5) (5,3) (4,4)             5/36
9                       (3,6) (6,3) (4,5) (5,4)                   4/36
10                      (4,6) (6,4) (5,5)                         3/36
11                      (5,6) (6,5)                               2/36
12                      (6,6)                                     1/36
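
Table 1 can be checked by enumerating all 36 equally likely ordered outcomes, for example with this short Python sketch:

from collections import Counter
from itertools import product

# All ordered pairs (first die, second die).
outcomes = list(product(range(1, 7), repeat=2))
sums = Counter(a + b for a, b in outcomes)

for x in range(2, 13):
    print(f"x = {x:2d}: {sums[x]}/36")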

(b) Based on Table 1 that you have completed, list all the possible outcomes of the following events and hence find their corresponding probabilities:
A = {The two numbers are not the same}
B = {The product of the two numbers is greater than 36}
C = {Both numbers are prime or the difference between the two numbers is odd}
D = {The sum of the two numbers is even and both numbers are prime}

Solution
Part 3(b)
A = {(1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (2,3), (2,4), (2,5), (2,6), (3,1), (3,2), (3,4), (3,5), (3,6), (4,1), (4,2), (4,3), (4,5), (4,6), (5,1), (5,2), (5,3), (5,4), (5,6), (6,1), (6,2), (6,3), (6,4), (6,5)}

It is easier to work with the complement A' (the two numbers are the same):
A' = {(1,1), (2,2), (3,3), (4,4), (5,5), (6,6)}
P(A') = 6/36 = 1/6
Since P(A) = 1 - P(A'), we get P(A) = 1 - 1/6 = 5/6.

B = { }, as the maximum product is 6 x 6 = 36, so no product can be greater than 36. This event is impossible. Thus P(B) = 0.

Prime numbers (from 1 to 6): 2, 3, 5
Possible odd differences: 1, 3, 5

Let P = {both numbers are prime} and Q = {the difference between the two numbers is odd}. Then C = P ∪ Q:
C = {(1,2), (1,4), (1,6), (2,1), (2,2), (2,3), (2,5), (3,2), (3,3), (3,4), (3,5), (3,6), (4,1), (4,3), (4,5), (5,2), (5,3), (5,4), (5,5), (5,6), (6,1), (6,3), (6,5)}
P(C) = 23/36

Let R = {the sum of the two numbers is even}. Then D = P ∩ R:
D = {(2,2), (3,3), (3,5), (5,3), (5,5)}
P(D) = 5/36
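
These four probabilities can be verified by brute force over the same 36 outcomes. A Python sketch (the set of primes is hard-coded for faces 1 to 6):

from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))
primes = {2, 3, 5}

A = [(a, b) for a, b in outcomes if a != b]
B = [(a, b) for a, b in outcomes if a * b > 36]
C = [(a, b) for a, b in outcomes
     if (a in primes and b in primes) or abs(a - b) % 2 == 1]
D = [(a, b) for a, b in outcomes
     if (a + b) % 2 == 0 and a in primes and b in primes]

for name, event in (("A", A), ("B", B), ("C", C), ("D", D)):
    print(name, Fraction(len(event), 36))
# Expected: A 5/6, B 0, C 23/36, D 5/36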

Answers:

A = {(1,2), (1,3), (1,4), (1,5), (1,6), (2,1), (2,3), (2,4), (2,5), (2,6), (3,1), (3,2), (3,4), (3,5), (3,6), (4,1), (4,2), (4,3), (4,5), (4,6), (5,1), (5,2), (5,3), (5,4), (5,6), (6,1), (6,2), (6,3), (6,4), (6,5)}
P(A) = 5/6

B = { }
P(B) = 0

C = {(1,2), (1,4), (1,6), (2,1), (2,2), (2,3), (2,5), (3,2), (3,3), (3,4), (3,5), (3,6), (4,1), (4,3), (4,5), (5,2), (5,3), (5,4), (5,5), (5,6), (6,1), (6,3), (6,5)}
P(C) = 23/36

D = {(2,2), (3,3), (3,5), (5,3), (5,5)}
P(D) = 5/36

Part 4
Part 4(a)
(a) Conduct an activity by tossing two dice simultaneously 50 times. Observe the sum of all dots on both turned-up faces. Complete the frequency table below.

Sum of the two numbers (x)    Frequency (f)    fx     fx²
2                             2                4      8
3                             4                12     36
4                             4                16     64
5                             9                45     225
6                             4                24     144
7                             11               77     539
8                             4                32     256
9                             6                54     486
10                            3                30     300
11                            1                11     121
12                            2                24     288
Total                         50               329    2467

From the table,

(i) Mean = Σfx / Σf = 329/50 = 6.58

(ii) Variance = Σfx²/Σf - (mean)² = 2467/50 - 6.58² = 6.0436

(iii) Standard deviation = √6.0436 ≈ 2.458
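
These three statistics follow mechanically from the table columns. A Python sketch using the n = 50 frequencies recorded above:

freq = {2: 2, 3: 4, 4: 4, 5: 9, 6: 4, 7: 11,
        8: 4, 9: 6, 10: 3, 11: 1, 12: 2}

n = sum(freq.values())                            # 50
mean = sum(x * f for x, f in freq.items()) / n
mean_sq = sum(x * x * f for x, f in freq.items()) / n
variance = mean_sq - mean ** 2
std_dev = variance ** 0.5

print(mean, variance, std_dev)   # 6.58, ~6.0436, ~2.458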

Part 4(b,c)

Sum of the two numbers (x)    Frequency (f)    fx     fx²
2                             4                8      16
3                             5                15     45
4                             6                24     96
5                             16               80     400
6                             12               72     432
7                             21               147    1029
8                             10               80     640
9                             8                72     648
10                            9                90     900
11                            5                55     605
12                            4                48     576
Total                         100              691    5387

From the table,

(i) Mean = Σfx / Σf = 691/100 = 6.91

(ii) Variance = Σfx²/Σf - (mean)² = 5387/100 - 6.91² = 6.1219

(iii) Standard deviation = √6.1219 ≈ 2.474
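
A frequency table like this can also be generated by simulation instead of by hand. A Python sketch (your own 100 physical tosses will of course give different counts):

import random
from collections import Counter

tosses = 100
freq = Counter(random.randint(1, 6) + random.randint(1, 6)
               for _ in range(tosses))

for x in range(2, 13):
    f = freq[x]
    print(x, f, f * x, f * x * x)   # columns: x, f, fx, fx²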

Part 5
Part 5(a)
x       2      3      4      5     6      7     8      9     10     11     12
P(x)    1/36   1/18   1/12   1/9   5/36   1/6   5/36   1/9   1/12   1/18   1/36
Part 5(b)

                      Part 4               Part 5
                      n = 50    n = 100    (theoretical)
Mean                  6.58      6.91       7.00
Variance              6.0436    6.1219     5.83
Standard deviation    2.458     2.474      2.415
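
The Part 5 column can be reproduced from the theoretical distribution in Part 5(a). A Python sketch using exact fractions (the expression 6 - |x - 7| is just a compact way of writing the outcome counts 1, 2, ..., 6, ..., 2, 1 from the Part 5(a) table):

from fractions import Fraction

# P(x) for the sum of two fair dice.
p = {x: Fraction(6 - abs(x - 7), 36) for x in range(2, 13)}

mean = sum(x * px for x, px in p.items())                      # exactly 7
variance = sum(x * x * px for x, px in p.items()) - mean ** 2  # 35/6
std_dev = float(variance) ** 0.5

print(mean, float(variance), std_dev)   # 7, ~5.833, ~2.415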

We can see that the mean, variance and standard deviation obtained through experiment in Part 4 are different from, but close to, the theoretical values in Part 5.

For the mean, when the number of trials increases from n = 50 to n = 100, its value gets closer (from 6.58 to 6.91) to the theoretical value. This is in accordance with the Law of Large Numbers, which we discuss in the next section. Nevertheless, the empirical variance and empirical standard deviation obtained in Part 4 get further from the theoretical values in Part 5. This appears to violate the Law of Large Numbers, and is probably due to:

a) The sample (n = 100) not being large enough to see the change in the values of the mean, variance and standard deviation.
b) The Law of Large Numbers not being an absolute law: deviations from it are possible, though their probability is relatively low.

In conclusion, the empirical mean, variance and standard deviation can differ from the theoretical values. As the number of trials (the sample size) gets bigger, the empirical values should get closer to the theoretical values. However, deviations from this rule are still possible, especially when the number of trials (or the sample) is not large enough.

Part 5(c)
The range of the mean:

Conjecture: As the number of tosses, n, increases, the mean will get closer to 7, the theoretical mean.

The figure below supports this conjecture: after about 500 tosses, the empirical mean becomes very close to the theoretical mean, which for a single die is 3.5. (Take note that the figure shows an experiment of tossing 1 die, not 2 dice as in our experiment.)

[Figure: average die value against number of rolls, from 0 to 1000 trials, with the running average converging to the line y = 3.5]
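
The same convergence can be reproduced numerically rather than graphically. A Python sketch for a single die, matching the figure (the checkpoints every 200 rolls are arbitrary):

import random

total = 0
for n in range(1, 1001):
    total += random.randint(1, 6)
    if n % 200 == 0:
        # Running average should drift toward the theoretical mean 3.5.
        print(n, total / n)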

FURTHER EXPLORATION

In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed. For example, a single roll of a six-sided die produces one of the numbers 1, 2, 3, 4, 5, 6, each with equal probability. Therefore, the expected value of a single die roll is

(1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5

According to the law of large numbers, if a large number of dice are rolled, the average of their values (sometimes called the sample mean) is likely to be close to 3.5, with the accuracy increasing as more dice are rolled.
Similarly, when a fair coin is flipped once, the expected value of the number of heads is equal to one half. Therefore, according to the law of large numbers, the proportion of heads in a large number of coin flips should be roughly one half. In particular, the proportion of heads after n flips will almost surely converge to one half as n approaches infinity.
Though the proportion of heads (and tails) approaches one half, almost surely the absolute (nominal) difference in the number of heads and tails will become large as the number of flips becomes large. That is, the probability that the absolute difference is a small number approaches zero as the number of flips becomes large. Also, almost surely the ratio of the absolute difference to the number of flips will approach zero. Intuitively, the expected absolute difference grows, but at a slower rate than the number of flips, as the number of flips grows.
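
This contrast between the proportion and the absolute difference is easy to observe numerically. A Python sketch (the flip counts are arbitrary):

import random

for flips in (100, 10_000, 1_000_000):
    heads = sum(random.choice((0, 1)) for _ in range(flips))
    tails = flips - heads
    # The proportion tends to 1/2, yet |heads - tails| typically grows
    # (on the order of the square root of the number of flips).
    print(flips, heads / flips, abs(heads - tails))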

The LLN is important because it "guarantees" stable long-term results for random events. For example, while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Any winning streak by a player will eventually be overcome by the parameters of the game. It is important to remember that the LLN only applies (as the name indicates) when a large number of observations are considered. There is no principle that a small number of observations will converge to the expected value or that a streak of one value will immediately be "balanced" by the others.
