We will look at the Gaussian distribution from a Bayesian point of view. In the standard form, the likelihood has two parameters, the mean $\mu$ and the variance $\sigma^2$:

$$P(x_1, x_2, \ldots, x_n \mid \mu, \sigma^2) \propto \frac{1}{\sigma^n} \exp\left( -\frac{1}{2\sigma^2} \sum_i (x_i - \mu)^2 \right) \tag{1}$$
Our aim is to find conjugate prior distributions for these parameters. We will investigate the hyper-parameter (prior parameter) update relations and the problem of predicting new data from old data: $P(x_{\text{new}} \mid x_{\text{old}})$.
1 The mean

We begin with the mean, assuming the variance $\sigma^2$ is known. The conjugate prior for $\mu$ is itself a Gaussian, with location hyper-parameter $\mu_0$ and scale hyper-parameter $\sigma_0$ (typically large):

$$P(\mu \mid \mu_0, \sigma_0^2) \propto \frac{1}{\sigma_0} \exp\left( -\frac{1}{2\sigma_0^2} (\mu - \mu_0)^2 \right) \tag{2}$$
Remark 1. In practice, when little is known about $\mu$, it is common to set the location hyper-parameter $\mu_0$ to zero and the scale $\sigma_0$ to some large value.
1.1 Posterior
We want to put together the prior (2) and the likelihood (1) to get the posterior $P(\mu \mid x)$. For now, assume we have only one measurement ($n = 1$).
There are several ways to do this. We could multiply the two distributions directly and complete the square in the exponent. Alternatively, note that $\mu$ and $x$ have a joint Gaussian distribution, so the conditional $\mu \mid x$ is also a Gaussian, for whose parameters we know formulas:
Lemma 2. Assume $(z_1, z_2)$ is distributed according to a bivariate Gaussian. Then $z_1 \mid z_2$ is Gaussian distributed with parameters:

$$E(z_1 \mid z_2) = E(z_1) + \frac{\operatorname{Cov}(z_1, z_2)}{\operatorname{Var}(z_2)} \left( z_2 - E(z_2) \right) \tag{3}$$

$$\operatorname{Var}(z_1 \mid z_2) = \operatorname{Var}(z_1) - \frac{\operatorname{Cov}^2(z_1, z_2)}{\operatorname{Var}(z_2)} \tag{4}$$
Remark 3. These formulas are extremely useful, so you should memorize them. They are easily derived using the notion of the Schur complement of a matrix.
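As a quick numerical sanity check of Lemma 2 (a minimal sketch; the numbers and variable names are illustrative, not from the text), we can sample a bivariate Gaussian and compare the empirical moments of a conditional slice with (3) and (4):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative bivariate Gaussian: mean vector and covariance matrix.
m = np.array([1.0, -2.0])
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])
z = rng.multivariate_normal(m, C, size=200_000)

# Condition on z2 lying in a thin slab around z2_star.
z2_star = -1.5
mask = np.abs(z[:, 1] - z2_star) < 0.05

# Formulas (3) and (4):
cond_mean = m[0] + C[0, 1] / C[1, 1] * (z2_star - m[1])
cond_var = C[0, 0] - C[0, 1] ** 2 / C[1, 1]

print(cond_mean, z[mask, 0].mean())  # should agree approximately
print(cond_var, z[mask, 0].var())
```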
We apply this lemma with the correspondence $x \leftrightarrow z_2$, $\mu \leftrightarrow z_1$. Writing

$$x = \mu + \sigma\varepsilon, \qquad \mu = \mu_0 + \sigma_0\varepsilon', \qquad \varepsilon, \varepsilon' \sim \mathcal{N}(0, 1),$$

we read off the moments

$$E(x) = \mu_0 \tag{5}$$

$$\operatorname{Var}(x) = \sigma^2 + \sigma_0^2 \tag{6}$$

$$\operatorname{Cov}(\mu, x) = \sigma_0^2 \tag{7}$$

Applying (3), the posterior mean is a convex combination of the MLE $x$ and the prior mean $\mu_0$:

$$E(\mu \mid x) = \mu_0 + \frac{\sigma_0^2}{\sigma^2 + \sigma_0^2} (x - \mu_0) = \frac{\sigma_0^2}{\sigma^2 + \sigma_0^2}\, x + \frac{\sigma^2}{\sigma^2 + \sigma_0^2}\, \mu_0$$

Applying (4), the posterior variance is

$$\operatorname{Var}(\mu \mid x) = \frac{\sigma^2 \sigma_0^2}{\sigma^2 + \sigma_0^2} = \left( \frac{1}{\sigma_0^2} + \frac{1}{\sigma^2} \right)^{-1} \tag{8}$$

In terms of precisions ($\tau = 1/\operatorname{Var}$), the posterior precision is the sum of the prior and data precisions:

$$\tau_{\text{post}} = \tau_{\text{prior}} + \tau_{\text{data}} \tag{9}$$
Summarizing, for a single measurement,

$$\mu \mid x \sim \mathcal{N}\!\left( \frac{\sigma_0^2}{\sigma^2 + \sigma_0^2}\, x + \frac{\sigma^2}{\sigma^2 + \sigma_0^2}\, \mu_0,\ \left( \frac{1}{\sigma_0^2} + \frac{1}{\sigma^2} \right)^{-1} \right)$$
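In code, the single-measurement update is one line each for the mean and the variance (a minimal sketch; all numerical values below are made up for illustration):

```python
import numpy as np

sigma2, sigma0_2 = 1.0, 4.0  # data variance and prior variance (illustrative)
mu0, x = 0.0, 3.0            # prior mean and the single observation

w = sigma0_2 / (sigma2 + sigma0_2)                 # weight on the MLE x
post_mean = w * x + (1 - w) * mu0                  # convex combination
post_var = 1.0 / (1.0 / sigma0_2 + 1.0 / sigma2)   # precisions add, cf. (8)-(9)

print(post_mean, post_var)  # the observation is shrunk toward mu0
```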
1.2 Multiple measurements

Now look at the posterior update for multiple measurements. We could adapt our previous derivation, but that would be tedious, since we would have to use the multivariate version of Lemma 2. Instead we will reduce the problem to the univariate case, with the sample mean $\bar{x} = \left( \sum_i x_i \right)/n$ as the new variable:

$$x_i \mid \mu \sim \mathcal{N}(\mu, \sigma^2) \ \text{i.i.d.} \quad \Longrightarrow \quad \bar{x} \mid \mu \sim \mathcal{N}\!\left( \mu, \frac{\sigma^2}{n} \right) \tag{10}$$
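A short simulation confirms (10) (a sketch with illustrative parameter values):

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n = 2.0, 1.5, 10  # illustrative values

# Empirical variance of the sample mean over many repetitions vs. sigma^2/n.
xbar = rng.normal(mu, sigma, size=(100_000, n)).mean(axis=1)
print(xbar.var(), sigma**2 / n)
```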
To see that the likelihood depends on the data only through $\bar{x}$, expand the square:

$$P(x_1, x_2, \ldots, x_n \mid \mu) \propto \exp\left( -\frac{1}{2\sigma^2} \sum_i (x_i - \mu)^2 \right) = \exp\left( -\frac{1}{2\sigma^2} \left( \sum_i x_i^2 - 2\mu \sum_i x_i + n\mu^2 \right) \right) \tag{11}$$

$$\propto \exp\left( -\frac{n}{2\sigma^2} \left( -2\mu\bar{x} + \mu^2 \right) \right) \propto \exp\left( -\frac{n}{2\sigma^2} (\bar{x} - \mu)^2 \right) \propto P(\bar{x} \mid \mu) \tag{12}$$
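Numerically, the log-ratio of the full likelihood to the reduced likelihood should be a constant in $\mu$ (a minimal check; the data and values are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
sigma, n = 1.0, 5
x = rng.normal(0.7, sigma, size=n)
xbar = x.mean()

# The difference of log-likelihoods depends on the data but not on mu.
for mu in (-1.0, 0.0, 2.0):
    full = norm.logpdf(x, mu, sigma).sum()               # log P(x_1..x_n | mu)
    reduced = norm.logpdf(xbar, mu, sigma / np.sqrt(n))  # log P(xbar | mu)
    print(full - reduced)  # same value for every mu, cf. (11)-(12)
```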
Substituting $\sigma^2/n$ for $\sigma^2$ in the single-measurement result then gives

$$\mu \mid \bar{x} \sim \mathcal{N}\!\left( \frac{\sigma_0^2}{\frac{\sigma^2}{n} + \sigma_0^2}\, \bar{x} + \frac{\sigma^2/n}{\frac{\sigma^2}{n} + \sigma_0^2}\, \mu_0,\ \left( \frac{n}{\sigma^2} + \frac{1}{\sigma_0^2} \right)^{-1} \right)$$
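The full update, wrapped as a function (a sketch; `posterior_mu` and the test values are ours, not from the text):

```python
import numpy as np

def posterior_mu(x, mu0, sigma0_2, sigma2):
    """Posterior mean and variance of mu for i.i.d. data with known variance."""
    n, xbar = len(x), np.mean(x)
    s2n = sigma2 / n                    # variance of the sample mean, cf. (10)
    w = sigma0_2 / (s2n + sigma0_2)     # weight on xbar
    mean = w * xbar + (1 - w) * mu0
    var = 1.0 / (n / sigma2 + 1.0 / sigma0_2)
    return mean, var

x = np.array([2.1, 1.7, 2.9, 2.4])
print(posterior_mu(x, mu0=0.0, sigma0_2=10.0, sigma2=1.0))
```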
2 The variance

2.1 Posterior
Assuming $\mu$ is fixed, the conjugate prior for $\sigma^2$ is an inverse Gamma distribution:

$$z \mid \alpha, \beta \sim \mathrm{IG}(\alpha, \beta) \tag{13}$$

$$P(z \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, z^{-\alpha - 1} \exp\left( -\frac{\beta}{z} \right) \tag{14}$$

Equivalently, we can work with the precision $\lambda = 1/\sigma^2$, whose conjugate prior is then a Gamma distribution:

$$P(\lambda \mid \alpha, \beta) = \frac{\beta^\alpha}{\Gamma(\alpha)}\, \lambda^{\alpha - 1} \exp(-\beta\lambda) \tag{15}$$

Multiplying by the likelihood (1) and collecting the powers of $\lambda$ gives the posterior update

$$\lambda \mid x \sim \mathrm{Ga}(\alpha_{\text{post}}, \beta_{\text{post}}), \qquad \alpha_{\text{post}} = \alpha + \frac{n}{2}, \qquad \beta_{\text{post}} = \beta + \frac{1}{2} \sum_i (x_i - \mu)^2 \tag{16}$$
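The update (16) in code (a minimal sketch; the helper name and data are illustrative):

```python
import numpy as np

def posterior_precision(x, mu, alpha, beta):
    """Gamma posterior hyper-parameters for the precision lambda, cf. (16)."""
    n = len(x)
    return alpha + n / 2.0, beta + 0.5 * np.sum((x - mu) ** 2)

x = np.array([0.3, -1.2, 0.8, 0.1])
print(posterior_precision(x, mu=0.0, alpha=1.0, beta=1.0))
```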
2.2 Prediction
We might want to compute the probability of getting some new data given old data. This can be done by
marginalizing out parameters:
$$P(x_{\text{new}} \mid x, \mu, \alpha, \beta) = \int P(x_{\text{new}} \mid x, \mu, \alpha, \beta, \lambda)\, P(\lambda \mid x, \mu, \alpha, \beta)\, d\lambda$$

$$= \int P(x_{\text{new}} \mid \mu, \lambda)\, P(\lambda \mid x, \alpha, \beta)\, d\lambda$$

$$= \int P(x_{\text{new}} \mid \mu, \lambda)\, P(\lambda \mid \alpha_{\text{post}}, \beta_{\text{post}})\, d\lambda \tag{17}$$
This integral smears the Gaussian into a heavier-tailed distribution, which will turn out to be a Student's t-distribution:

$$\lambda \mid \alpha, \beta \sim \mathrm{Ga}(\alpha, \beta), \qquad x \mid \mu, \lambda \sim \mathcal{N}(\mu, \lambda^{-1})$$

$$P(x \mid \mu, \alpha, \beta) = \int_0^\infty \frac{\beta^\alpha}{\Gamma(\alpha)}\, \lambda^{\alpha - 1} e^{-\beta\lambda} \left( \frac{\lambda}{2\pi} \right)^{\frac{1}{2}} \exp\left( -\frac{\lambda}{2} (x - \mu)^2 \right) d\lambda$$

$$= \frac{\beta^\alpha}{\Gamma(\alpha)\, (2\pi)^{\frac{1}{2}}} \int_0^\infty \lambda^{(\alpha + \frac{1}{2}) - 1} e^{-\left( \beta + (x - \mu)^2/2 \right) \lambda}\, d\lambda$$

$$= \frac{\beta^\alpha}{\Gamma(\alpha)\, (2\pi)^{\frac{1}{2}}} \cdot \frac{\Gamma\!\left(\alpha + \frac{1}{2}\right)}{\left( \beta + \frac{1}{2} (x - \mu)^2 \right)^{\alpha + \frac{1}{2}}}$$

$$= \frac{\Gamma\!\left(\alpha + \frac{1}{2}\right)}{\Gamma(\alpha)\, (2\pi\beta)^{\frac{1}{2}}} \left( 1 + \frac{(x - \mu)^2}{2\beta} \right)^{-\left( \alpha + \frac{1}{2} \right)} \tag{18}$$
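We can check (18) by comparing numerical quadrature of the left-hand integral with the closed form (a sketch; all parameter values are illustrative):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaln

alpha, beta, mu, x = 2.0, 3.0, 0.0, 1.5  # illustrative values

# Integrate the Gamma-weighted Gaussian over lambda.
def integrand(lam):
    return (beta**alpha / np.exp(gammaln(alpha)) * lam**(alpha - 1)
            * np.exp(-beta * lam) * np.sqrt(lam / (2 * np.pi))
            * np.exp(-0.5 * lam * (x - mu) ** 2))

lhs, _ = quad(integrand, 0, np.inf)

# The closed form on the right-hand side of (18).
rhs = (np.exp(gammaln(alpha + 0.5) - gammaln(alpha)) / np.sqrt(2 * np.pi * beta)
       * (1 + (x - mu) ** 2 / (2 * beta)) ** (-(alpha + 0.5)))

print(lhs, rhs)  # agree to quadrature accuracy
```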
Remark 10. The Student-t density has three parameters, $\mu$, $\alpha$, $\beta$, and is symmetric around $\mu$. When $\alpha$ is an integer or a half-integer we get simplifications using the formulas $\Gamma(k + 1) = k\,\Gamma(k)$ and $\Gamma(1/2) = \sqrt{\pi}$.
The following is another useful parametrization of the Student's t-distribution, in terms of the degrees of freedom $p = 2\alpha$ and the precision $\lambda = \alpha/\beta$:

$$P(x \mid \mu, p, \lambda) = \frac{\Gamma\!\left( \frac{p+1}{2} \right)}{\Gamma\!\left( \frac{p}{2} \right)} \left( \frac{\lambda}{p\pi} \right)^{\frac{1}{2}} \left( 1 + \frac{\lambda}{p} (x - \mu)^2 \right)^{-\frac{p+1}{2}} \tag{19}$$
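This matches the parametrization used by standard libraries, with scale $1/\sqrt{\lambda}$; a quick consistency check against scipy's Student-t (a sketch, with illustrative values):

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import t

alpha, beta, mu = 2.0, 3.0, 0.0
p, lam = 2 * alpha, alpha / beta  # degrees of freedom and precision in (19)

x = np.linspace(-4, 4, 9)
dens19 = (np.exp(gammaln((p + 1) / 2) - gammaln(p / 2))
          * np.sqrt(lam / (p * np.pi))
          * (1 + lam / p * (x - mu) ** 2) ** (-(p + 1) / 2))

# Same density as scipy's t with df=p, loc=mu, scale=1/sqrt(lam).
print(np.allclose(dens19, t.pdf(x, df=p, loc=mu, scale=1 / np.sqrt(lam))))
```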
3 Jointly unknown mean and variance

Now we want to put a prior on $\mu$ and $\sigma^2$ together. We could simply multiply the prior densities we obtained in the previous two sections, implicitly assuming $\mu$ and $\sigma^2$ are independent. Unfortunately, if we did that, we would not get a conjugate prior. One way to see this is that if we believe that our data is generated according to the graphical model in Figure 1, we find that, conditioned on $x$, the two parameters $\mu$ and $\sigma^2$ are, in fact, dependent, and this should be expressed by a conjugate prior.
Working with the precision $\lambda = 1/\sigma^2$, the conjugate prior couples $\mu$ and $\lambda$ as follows:

$$x_i \mid \mu, \lambda \sim \mathcal{N}(\mu, \lambda^{-1}) \ \text{i.i.d.}, \qquad \mu \mid \lambda \sim \mathcal{N}\!\left( \mu_0, (n_0 \lambda)^{-1} \right), \qquad \lambda \sim \mathrm{Ga}(\alpha, \beta) \tag{20}$$

3.1 Posterior
Next, look at $\lambda \mid x$. We get this by expressing the joint density $P(\mu, \lambda \mid x)$ and marginalizing out $\mu$:

$$P(\mu, \lambda \mid x) \propto P(\lambda)\, P(\mu \mid \lambda)\, P(x \mid \mu, \lambda) \tag{21}$$

$$\propto \lambda^{\alpha - 1} e^{-\beta\lambda}\, \lambda^{\frac{1}{2}} \exp\left( -\frac{n_0 \lambda}{2} (\mu - \mu_0)^2 \right) \lambda^{\frac{n}{2}} \exp\left( -\frac{\lambda}{2} \sum_i (x_i - \mu)^2 \right) \tag{22}$$

Using the trick $x_i - \mu = (x_i - \bar{x}) + (\bar{x} - \mu)$, so that $\sum_i (x_i - \mu)^2 = \sum_i (x_i - \bar{x})^2 + n (\bar{x} - \mu)^2$:

$$\propto \lambda^{\alpha + \frac{n}{2} - 1} \exp\left( -\left( \beta + \frac{1}{2} \sum_i (x_i - \bar{x})^2 \right) \lambda \right) \lambda^{\frac{1}{2}} \exp\left( -\frac{\lambda}{2} \left( n_0 (\mu - \mu_0)^2 + n (\bar{x} - \mu)^2 \right) \right) \tag{23}$$
Then the posterior is:

$$\mu \mid \lambda, x \sim \mathcal{N}\!\left( \frac{n}{n + n_0}\, \bar{x} + \frac{n_0}{n + n_0}\, \mu_0,\ \left( (n + n_0) \lambda \right)^{-1} \right)$$

$$\lambda \mid x \sim \mathrm{Ga}\!\left( \alpha + \frac{n}{2},\ \beta + \frac{1}{2} \sum_i (x_i - \bar{x})^2 + \frac{n n_0}{2 (n + n_0)} (\bar{x} - \mu_0)^2 \right)$$
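The whole conjugate update in one function (a sketch; `normal_gamma_posterior` and the data are ours, for illustration):

```python
import numpy as np

def normal_gamma_posterior(x, mu0, n0, alpha, beta):
    """Posterior hyper-parameters for the conjugate prior (20)."""
    n, xbar = len(x), np.mean(x)
    mu_post = (n * xbar + n0 * mu0) / (n + n0)
    n_post = n + n0
    alpha_post = alpha + n / 2.0
    beta_post = (beta + 0.5 * np.sum((x - xbar) ** 2)
                 + n * n0 / (2.0 * (n + n0)) * (xbar - mu0) ** 2)
    return mu_post, n_post, alpha_post, beta_post

x = np.array([1.2, 0.4, 2.3, 1.8, 0.9])
print(normal_gamma_posterior(x, mu0=0.0, n0=1.0, alpha=1.0, beta=1.0))
```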
3.2 Prediction

$$P(x_{\text{new}} \mid x) = \iint P(x_{\text{new}} \mid \mu, \lambda)\, P(\mu, \lambda \mid x)\, d\mu\, d\lambda$$

Carrying out the integral over $\mu$ first and then over $\lambda$, as in (18), shows that $x_{\text{new}} \mid x$ is again Student-t distributed.
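Even without the closed form, we can sample from this predictive by ancestral sampling from the posterior (a sketch; the posterior hyper-parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical posterior hyper-parameters, as produced by the update above.
mu_post, n_post, alpha_post, beta_post = 1.3, 6.0, 3.5, 2.9

# lambda ~ Ga(alpha_post, beta_post)  (numpy's gamma takes scale = 1/rate)
lam = rng.gamma(alpha_post, 1.0 / beta_post, size=100_000)
# mu | lambda ~ N(mu_post, 1/(n_post * lambda))
mu = rng.normal(mu_post, 1.0 / np.sqrt(n_post * lam))
# x_new | mu, lambda ~ N(mu, 1/lambda); the tails are heavier than Gaussian.
x_new = rng.normal(mu, 1.0 / np.sqrt(lam))

print(x_new.mean(), np.percentile(x_new, [2.5, 97.5]))
```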