
MATH 181

1st SEMESTER/AY 2018-2019


Lecturer: Eleanor Gemida

Frequently used continuous random variables

1. Uniform Distribution

We write X ~ U (a, b).


PDF: f(x) = \frac{1}{b-a}, \quad a < x < b; \qquad f(x) = 0 \text{ otherwise}

CDF: F(x) = \int_{-\infty}^{x} f(y)\,dy =
\begin{cases} 0 & \text{if } x < a \\ \dfrac{x-a}{b-a} & \text{if } a \le x < b \\ 1 & \text{if } x \ge b \end{cases}

E(X) = \frac{a+b}{2}

Var(X) = \frac{(b-a)^2}{12}
MGF: M_X(t) = \frac{e^{bt} - e^{at}}{(b-a)\,t} \quad \text{for } t \neq 0, \qquad M_X(0) = 1

Example
The future lifetime (in years) of a newborn is uniformly distributed over the interval (0, 100).
1. What is the probability that the newborn will die between ages 80 and 90?
2. Given that the newborn survived to age x, find the distribution of the future lifetime of the person aged x.
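A quick numerical check of part 1 (a sketch only, assuming scipy is available; the answer (90 − 80)/100 = 0.1 follows directly from the CDF):

    from scipy.stats import uniform

    # Future lifetime of a newborn, T ~ U(0, 100): loc = 0, scale = 100
    T = uniform(loc=0, scale=100)

    # P(80 < T < 90) = F(90) - F(80) = 0.9 - 0.8 = 0.1
    print(T.cdf(90) - T.cdf(80))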
2. Exponential Distribution

We write X ~ Exp(λ).

PDF: f(x) = \lambda e^{-\lambda x}, \quad x > 0, \ \lambda > 0

CDF: F(x) = 1 - e^{-\lambda x}, \quad x > 0

E(X) = \frac{1}{\lambda}

Var(X) = \frac{1}{\lambda^2}

MGF: M_X(t) = \frac{\lambda}{\lambda - t} = \frac{1}{1 - t/\lambda}, \quad t < \lambda

Example
The future lifetime (in years) of a newborn is exponentially distributed with parameter λ (so its mean is 1/λ).
Given that the newborn survived to age x, find the distribution of the future lifetime of the person aged x. What do you notice?
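The question points to the memoryless property: P(X > x + t | X > x) = P(X > t), so the person aged x has the same Exp(λ) future-lifetime distribution as a newborn. A small numerical illustration (a sketch; scipy assumed, and λ = 0.05, x = 30, t = 10 are arbitrary illustrative values):

    from scipy.stats import expon

    lam = 0.05                    # arbitrary rate, chosen for illustration
    X = expon(scale=1/lam)        # scipy parameterizes Exp by scale = 1/lambda

    x, t = 30, 10
    print(X.sf(x + t) / X.sf(x))  # P(X > x + t | X > x)
    print(X.sf(t))                # P(X > t) -- the same value: memoryless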
3. Gamma Distribution

Consider a number α of independent, identically distributed random variables X1, X2, ..., Xα. Suppose their common distribution is exponential with mean β; that is, for all i = 1, 2, ..., α,

X_i \sim Exp(1/\beta), \qquad f_{X_i}(x) = \beta^{-1} e^{-x/\beta}.

Let S = X_1 + X_2 + \cdots + X_\alpha. The distribution of S is known as the gamma distribution, and the construction can be generalized to any α > 0, not necessarily an integer.

✓ That is, the sum of independent, identically distributed exponential random variables is gamma distributed.

We write X ~ Gam(α, β).

PDF: f(x) = \frac{1}{c\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, \quad x > 0, \text{ where } c \text{ is a constant.}

The constant c is

c = \int_0^{+\infty} t^{\alpha-1} e^{-t}\,dt = \Gamma(\alpha).

We call this integral the gamma function, so the PDF can also be written as

f(x) = \frac{1}{\Gamma(\alpha)\,\beta^{\alpha}}\, x^{\alpha-1} e^{-x/\beta}, \quad x > 0.

✓ The gamma function is a generalization of the factorial.
✓ If α is an integer, then Γ(α) = (α − 1)!  Exercise!
✓ Γ(α + 1) = α Γ(α), α > 0.  Exercise!
✓ If α = 1, then X ~ Exp(1/β), the exponential distribution with mean β.

Moments and MGF:

E(S) = \alpha\beta, \qquad Var(S) = \alpha\beta^{2}, \qquad M_S(t) = (1 - \beta t)^{-\alpha}, \quad t < 1/\beta.

Example
In the Razon household, Mr. and Mrs. Razon both have jobs. The income of Mr. Razon has a gamma distribution with mean 3 and variance 3. The income of Mrs. Razon has a gamma distribution with mean 2 and variance 2. The two incomes are independent. Calculate the probability that the income of Mr. Razon is less than that of Mrs. Razon. (Set-up only.) EXERCISE!
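For the set-up: a gamma distribution with mean αβ = 3 and variance αβ² = 3 forces β = 1, α = 3 for Mr. Razon, and similarly β = 1, α = 2 for Mrs. Razon. Writing X ~ Gam(3, 1) and Y ~ Gam(2, 1), independent, the required probability is P(X < Y), the double integral of the joint density over the region x < y. A minimal Monte Carlo sketch of this set-up (numpy assumed; the sample size and seed are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10**6

    x = rng.gamma(shape=3, scale=1, size=n)   # Mr. Razon's income, Gam(3, 1)
    y = rng.gamma(shape=2, scale=1, size=n)   # Mrs. Razon's income, Gam(2, 1)

    print(np.mean(x < y))                     # Monte Carlo estimate of P(X < Y)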
4. Beta Distribution

Suppose U ~ Gam(α1, 1) and V ~ Gam(α2, 1), where U and V are independent.
We are interested in the distribution of the proportion

B = \frac{U}{U + V}.

The distribution of B is called the Beta distribution.

We write X ~ B(α1, α2).

PDF: f(x) = \frac{\Gamma(\alpha_1 + \alpha_2)}{\Gamma(\alpha_1)\,\Gamma(\alpha_2)}\, x^{\alpha_1 - 1} (1 - x)^{\alpha_2 - 1}, \quad 0 < x < 1
EXERCISE!

E(X) = \frac{\alpha_1}{\alpha_1 + \alpha_2}

Var(X) = \frac{\alpha_1\,\alpha_2}{(\alpha_1 + \alpha_2)^2\,(\alpha_1 + \alpha_2 + 1)}

Example
From the previous problem on the income of Mr. and Mrs. Razon, calculate the expected value and variance of the ratio of the income of Mr. Razon to the total income of the couple.
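By the construction above, with X ~ Gam(3, 1) and Y ~ Gam(2, 1) from the Razon set-up, the ratio B = X/(X + Y) ~ B(3, 2), so E(B) = 3/5 and Var(B) = (3 · 2)/(5² · 6) = 0.04. A quick check of these values (a sketch; scipy assumed):

    from scipy.stats import beta

    B = beta(a=3, b=2)         # ratio of Mr. Razon's income to the couple's total
    print(B.mean(), B.var())   # 0.6 and 0.04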
5. Normal Distribution

We write X ~ N(μ, σ²). Parameters: μ, σ².

PDF: f(x) = \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right), \quad -\infty < x < +\infty

CDF: F(x) = \int_{-\infty}^{x} f(y)\,dy = \int_{-\infty}^{x} \frac{1}{\sigma\sqrt{2\pi}}\, \exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right) dy

There is no closed form for the CDF.

Using the change of variables x = σy + μ, the moment generating function is computed and we obtain

M_X(t) = \exp\!\left(\mu t + \frac{\sigma^2 t^2}{2}\right), \quad \text{for any real number } t.

❖ The derivation is left as an exercise.

Using the MGF, the moments are easily computed:

E(X) = \mu, \qquad Var(X) = \sigma^2

✓ It is easy to verify using the MGF that the sum of two independent normal variables is also normal, with mean equal to the sum of the means and variance equal to the sum of the variances.

Standard Normal Distribution
When a random variable following a normal distribution has mean 0 and variance 1, we call it the STANDARD NORMAL DISTRIBUTION, denoted by Z. Hence, the PDF of Z is given by

f(z) = \frac{1}{\sqrt{2\pi}}\, \exp\!\left(-\frac{z^2}{2}\right), \quad -\infty < z < +\infty.

Recall the transformation discussed in the previous chapter: if X ~ N(μ, σ²), then

Z = \frac{X - \mu}{\sigma}

is standard normal. This transformation from normal to standard normal is helpful especially when we are computing probabilities such as P(X ≤ a). Since there is no closed form for the CDF of X, we avoid the complication of performing the integral by using this transformation together with the standard normal table.

Recall the following notes on reading the standard table for z ≥ 0:
1. P(Z ≤ z) is the area under the graph of the PDF from −∞ to z.
2. The graph of the PDF is symmetric about the origin.
3. P(Z > z) = 1 − P(Z ≤ z) is the area under the graph of the PDF from z to +∞.

Example
Suppose X ~ N(2, 4). Calculate
1. P(X ≤ 1.16)
2. P(X > −1)
3. P(0 ≤ X ≤ 0.5)
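A numerical check of this example, standardizing with Z = (X − 2)/2 (a sketch; scipy assumed, and note that scipy's norm takes the standard deviation √4 = 2, not the variance):

    from scipy.stats import norm

    X = norm(loc=2, scale=2)        # N(2, 4): scale is the standard deviation

    print(X.cdf(1.16))              # 1. P(X <= 1.16)      ~ 0.337
    print(X.sf(-1))                 # 2. P(X > -1)         ~ 0.933
    print(X.cdf(0.5) - X.cdf(0))    # 3. P(0 <= X <= 0.5)  ~ 0.068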
Normal Approximations

The central limit theorem
Let X1, X2, ..., Xn be identically distributed and independent random variables with mean μ and variance σ². Then, as n becomes large,

X_1 + X_2 + \cdots + X_n \approx N(n\mu,\, n\sigma^2).

In other words,

\frac{X_1 + X_2 + \cdots + X_n - n\mu}{\sqrt{n\sigma^2}} \approx Z.

One of the applications of the central limit theorem is normal approximation.

Example
Suppose that there are 100 independent and identically distributed random variables uniformly distributed over (1, 4). Approximate the probability that the sum is at least 264.25.

Example
Suppose there are 9 independent Bernoulli random variables with parameter 0.5. Let S = X1 + X2 + ... + X9. Compare the exact calculation of P(S ≤ 3) with the approximation using the central limit theorem.

Continuity Correction
Let S be a sum of n independent, identically distributed discrete random variables with mean μ and variance σ². Then the normal approximation is given by

P(S \le k) \approx P\!\left(Z \le \frac{k + 0.5 - n\mu}{\sqrt{n\sigma^2}}\right).

This is used when a discrete random variable is approximated by a normal variable.

Example
From the previous example, approximate P(S ≤ 3) using the continuity correction.

Other Approximation Theorems
1. Poisson Approximation of the Binomial
Let Xn, n = 1, 2, ..., be Binomial random variables with parameters n and pn. Suppose that lim_{n→∞} n·pn = λ. Then Xn converges in distribution to a Poisson random variable with mean λ.
2. Law of Large Numbers
Let X1, X2, ..., Xn be independent and identically distributed random variables with mean μ and variance σ². Let Sn = X1 + X2 + ∙∙∙ + Xn. Then

\frac{S_n}{n} \to \mu \quad \text{as } n \to +\infty.
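A numerical sketch of the two CLT examples and the continuity-correction example above (scipy assumed; the exact binomial value is included for comparison):

    import numpy as np
    from scipy.stats import norm, binom

    # Example 1: sum of 100 iid U(1, 4) variables; is the sum at least 264.25?
    mu, var = 2.5, (4 - 1) ** 2 / 12          # mean and variance of U(1, 4)
    z = (264.25 - 100 * mu) / np.sqrt(100 * var)
    print(norm.sf(z))                         # ~ 0.05

    # Example 2: S = sum of 9 Bernoulli(0.5), i.e. S ~ Bin(9, 0.5)
    n, p = 9, 0.5
    exact = binom.cdf(3, n, p)                                          # exact P(S <= 3)
    approx = norm.cdf((3 - n * p) / np.sqrt(n * p * (1 - p)))           # CLT, no correction
    corrected = norm.cdf((3 + 0.5 - n * p) / np.sqrt(n * p * (1 - p)))  # continuity correction
    print(exact, approx, corrected)           # ~ 0.254, 0.159, 0.252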
6. Chi-square Distribution

Let Z1, Z2, ..., Zr be independent standard normal variables. Let

X = Z_1^2 + Z_2^2 + \cdots + Z_r^2.

The distribution of X is called the Chi-square distribution with r degrees of freedom. That is, a Chi-square random variable with r degrees of freedom is a sum of squares of r independent standard normal variables.

The Chi-square distribution is widely used in statistical analysis. It is used
• to develop hypothesis tests and confidence intervals for a population standard deviation;
• to test for independence;
• in the chi-square goodness-of-fit test, to test the distribution of a categorical data set.

We write X ~ χ²(r).

MGF: M_X(t) = (1 - 2t)^{-r/2}, \quad t < 1/2

E(X) = r, \qquad Var(X) = 2r

7. Lognormal Distribution

A positive random variable X is said to have a lognormal distribution with parameters μ and σ if Y = ln X has a normal distribution with mean μ and variance σ².

We write X ~ Lognorm(μ, σ). This means that Y = ln X ~ N(μ, σ²).

Normal vs Lognormal
• The graph of the normal density is symmetric; the lognormal is right-skewed.
• The normal random variable admits negative values; the lognormal admits only positive values.
✓ The lognormal can be used in modelling heavy losses. It is also a basis for options pricing.

CDF: F(x) = P(X \le x) = P(Y \le \ln x) = P\!\left(Z \le \frac{\ln x - \mu}{\sigma}\right)

We use the moment-generating function of Y in order to obtain the moments of X.

Recall: if Y ~ N(μ, σ²), then

M_Y(t) = E\!\left(e^{tY}\right) = \exp\!\left(\mu t + \frac{\sigma^2 t^2}{2}\right).

Hence,

E(X) = E\!\left(e^{Y}\right) = M_Y(1) = \exp\!\left(\mu + \frac{\sigma^2}{2}\right)

E\!\left(X^2\right) = E\!\left(e^{2Y}\right) = M_Y(2) = \exp\!\left(2\mu + 2\sigma^2\right)

Var(X) = \exp\!\left(2\mu + 2\sigma^2\right) - \exp\!\left(2\mu + \sigma^2\right)
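A quick check of the lognormal moment formulas against scipy's parameterization (a sketch; in scipy.stats.lognorm the shape s is σ and the scale is e^μ, and μ = 0.5, σ = 0.8 are arbitrary illustrative values):

    import numpy as np
    from scipy.stats import lognorm

    mu, sigma = 0.5, 0.8                     # arbitrary parameters for illustration
    X = lognorm(s=sigma, scale=np.exp(mu))   # X ~ Lognorm(mu, sigma)

    print(X.mean(), np.exp(mu + sigma**2 / 2))                            # E(X)
    print(X.var(), np.exp(2*mu + 2*sigma**2) - np.exp(2*mu + sigma**2))   # Var(X)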
8. Pareto Distribution

Like the lognormal distribution, the Pareto distribution is heavy-tailed, i.e., a considerable probability is assigned to large values. It is used in modelling heavy losses such as in fire insurance, the distribution of wealth in a society, etc.

We write X ~ Par(α, θ).

PDF: f(x) = \frac{\alpha\,\theta^{\alpha}}{(x + \theta)^{\alpha + 1}}, \quad x > 0

CDF: F(x) = 1 - \left(\frac{\theta}{x + \theta}\right)^{\alpha}

E(X) = \frac{\theta}{\alpha - 1} \quad (\alpha > 1)

Var(X) = \frac{\alpha\,\theta^{2}}{(\alpha - 2)(\alpha - 1)^{2}} \quad (\alpha > 2)

9. Weibull Distribution

Recall the CDF of an exponential random variable with parameter λ:

CDF: F(x) = 1 - e^{-\lambda x}

We can rewrite this CDF as

F(x) = 1 - \exp\!\left(-\int_0^{x} \lambda(t)\,dt\right),

where λ(t) = λ, a constant. This function λ(t), in general, is called the HAZARD RATE or FAILURE RATE. We can interpret it as the rate at which defects (failures) happen. For the case of the exponential random variable, "failures" arrive at the constant rate λ.

For the 2-parameter Weibull distribution, we assume that the failure rate is proportional to a power of t:

\lambda(t) = \alpha \lambda\, t^{\alpha - 1}, \quad t > 0.

With this assumption on the failure rate, we obtain the following properties of the 2-parameter Weibull distribution.

We write X ~ WB(α, λ).

PDF: f(x) = \alpha \lambda\, x^{\alpha - 1} e^{-\lambda x^{\alpha}}, \quad x > 0

CDF: F(x) = 1 - e^{-\lambda x^{\alpha}}

Note: There are other forms of the Weibull distribution available in the literature.

The Weibull distribution is mostly used in
• data analysis
• reliability analysis.
In fact, it was first used to model the breaking strength of materials.

10. Cauchy Distribution

The Cauchy distribution is an example of a distribution which has NO mean, variance or higher moments defined.

We write X ~ Cauchy(θ), θ > 0.

PDF: f(x) = \frac{\theta}{\pi\,(\theta^{2} + x^{2})}, \quad x \in \mathbb{R}

Characteristic function: \varphi(t) = e^{-\theta\,|t|} (the moment generating function does not exist, consistent with the absence of moments).
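A small illustration of the "no mean" statement for the Cauchy: the running average of Cauchy samples does not settle down the way the law of large numbers would predict for a distribution with a finite mean (a sketch; numpy assumed, θ = 1 and the seed chosen arbitrarily):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10**6

    samples = rng.standard_cauchy(n)          # Cauchy with theta = 1
    running_mean = np.cumsum(samples) / np.arange(1, n + 1)

    # The running mean keeps jumping instead of converging to a constant.
    print(running_mean[[10**3 - 1, 10**4 - 1, 10**5 - 1, 10**6 - 1]])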
