
Estimation of Variances and Covariances

1 Variables and Distributions


Random variables are samples from a population with a given set of population parameters.
Random variables can be discrete, having a limited number of distinct possible values, or
continuous.
2 Continuous Random Variables
The cumulative distribution function of a random variable $Y$ is
$$F(y) = \Pr(Y \le y), \quad \mbox{for } -\infty < y < \infty.$$
As $y$ approaches $-\infty$, $F(y)$ approaches 0; as $y$ approaches $\infty$, $F(y)$ approaches 1. $F(y)$ is a nondecreasing function of $y$: if $a < b$, then $F(a) \le F(b)$.
The density function is the derivative of the distribution function,
$$p(y) = \frac{\partial F(y)}{\partial y} = F'(y),$$
wherever the derivative exists, and
$$\int_{-\infty}^{\infty} p(y)\, dy = 1, \qquad F(t) = \int_{-\infty}^{t} p(y)\, dy.$$
The expected value of $y$, or of a function $g(y)$ of $y$, is
$$E(y) = \int_{-\infty}^{\infty} y\, p(y)\, dy, \qquad E(g(y)) = \int_{-\infty}^{\infty} g(y)\, p(y)\, dy,$$
and the variance is
$$Var(y) = E(y^2) - [E(y)]^2.$$
2.1 Normal Random Variables
Random variables in animal breeding problems are typically assumed to be samples from Normal distributions, where
$$p(y) = (2\pi)^{-.5}\,\sigma^{-1} \exp\left(-.5(y-\mu)^2/\sigma^2\right)$$
for $-\infty < y < +\infty$, where $\sigma^2$ is the variance of $y$ and $\mu$ is the expected value of $y$.
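As a quick check, the density formula can be compared with R's built-in dnorm function (a minimal sketch; the values of mu, sigma2, and y are arbitrary):

# Compare the normal density formula with dnorm
mu = 5
sigma2 = 4
y = 6.3
p1 = (2*pi)^(-0.5) * sigma2^(-0.5) * exp(-0.5*(y - mu)^2/sigma2)
p2 = dnorm(y, mean = mu, sd = sqrt(sigma2))
c(p1, p2)   # the two values should be identical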
For a random vector variable, $y$, the multivariate normal density function is
$$p(y) = (2\pi)^{-.5n}\, |V|^{-.5} \exp\left(-.5(y-\mu)'V^{-1}(y-\mu)\right),$$
denoted as $y \sim N(\mu, V)$, where $V$ is the variance-covariance matrix of $y$. The determinant of $V$ must be positive, otherwise the density function is undefined.
2.2 Chi-Square Random Variables
If $y \sim N(0, I)$, then $y'y \sim \chi^2_n$, where $\chi^2_n$ is a central chi-square distribution with $n$ degrees of freedom and $n$ is the length of the random vector variable $y$. The mean is $n$ and the variance is $2n$. If $s = y'y > 0$, then
$$p(s \mid n) = s^{(n/2)-1} \exp(-0.5s)\, / \left[2^{0.5n}\,\Gamma(0.5n)\right].$$
If $y \sim N(\mu, I)$, then $y'y \sim \chi^2_{n,\lambda}$, where $\lambda$ is the noncentrality parameter, which is equal to $.5\mu'\mu$. The mean of a noncentral chi-square distribution is $n + 2\lambda$ and the variance is $2n + 8\lambda$.
If $y \sim N(\mu, V)$, then $y'Qy$ has a noncentral chi-square distribution only if $QV$ is idempotent, i.e. $QVQV = QV$. The noncentrality parameter is $\lambda = .5\mu'QVQ\mu$, and the mean and variance of the distribution are $tr(QV) + 2\lambda$ and $2\,tr(QV) + 8\lambda$, respectively.
If there are two quadratic forms of $y$, say $y'Qy$ and $y'Py$, and both quadratic forms have chi-square distributions, then the two quadratic forms are independent if $QVP = 0$.
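These results are easy to verify by simulation. The sketch below (with arbitrary choices for n and the number of replicates) generates many values of $y'y$ for $y \sim N(0, I)$ and compares their mean and variance with $n$ and $2n$:

# Simulate y'y for y ~ N(0,I) and compare with n and 2n
n = 5
nrep = 10000
s = replicate(nrep, sum(rnorm(n)^2))
c(mean(s), n)       # sample mean versus n
c(var(s), 2*n)      # sample variance versus 2n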
2.3 The Wishart Distribution
The Wishart distribution is similar to a multivariate Chi-square distribution. An entire matrix is envisioned, of which the diagonals have a Chi-square distribution and the off-diagonals have a built-in correlation structure. The resulting matrix is positive definite. This distribution is needed when estimating covariance matrices, such as in multiple trait models, maternal genetic effect models, or random regression models.
2.4 The F-distribution
The F-distribution is used for hypothesis testing and is built upon two independent Chi-square random variables. Let $s \sim \chi^2_n$ and $v \sim \chi^2_m$, with $s$ and $v$ being independent; then
$$\frac{(s/n)}{(v/m)} \sim F_{n,m}.$$
The mean of the F-distribution is $m/(m-2)$. The variance is
$$\frac{2m^2(n+m-2)}{n(m-2)^2(m-4)}.$$
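A small simulation sketch (with arbitrary degrees of freedom) that builds F variates from two independent Chi-square samples and checks the mean and variance formulas:

# F variates as a ratio of two independent chi-squares
n = 6
m = 20
nrep = 100000
f = (rchisq(nrep, n)/n)/(rchisq(nrep, m)/m)
c(mean(f), m/(m - 2))
c(var(f), 2*m^2*(n + m - 2)/(n*(m - 2)^2*(m - 4)))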
3 Expectations of Random Vectors
Let $y_1$ be a random vector variable, then
$$E(y_1) = \mu_1 = \begin{pmatrix} \mu_{11} \\ \mu_{12} \\ \vdots \\ \mu_{1n} \end{pmatrix},$$
for a vector of length $n$. If $c$ is a scalar constant, then
$$E(cy_1) = c\mu_1.$$
Similarly, if $C$ is a matrix of constants, then
$$E(Cy_1) = C\mu_1.$$
Let $y_2$ be another random vector variable of the same length as $y_1$, then
$$E(y_1 + y_2) = E(y_1) + E(y_2) = \mu_1 + \mu_2.$$
4 Variance-Covariance Matrices
Let $y$ be a random vector variable of length $n$; then the variance-covariance matrix of $y$ is
$$Var(y) = E(yy') - E(y)E(y') = \begin{pmatrix} \sigma^2_{y_1} & \sigma_{y_1y_2} & \cdots & \sigma_{y_1y_n} \\ \sigma_{y_1y_2} & \sigma^2_{y_2} & \cdots & \sigma_{y_2y_n} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{y_1y_n} & \sigma_{y_2y_n} & \cdots & \sigma^2_{y_n} \end{pmatrix} = V.$$
A variance-covariance (VCV) matrix is square, symmetric, and should always be positive definite, i.e. all of its eigenvalues must be positive. Another name for a VCV matrix is a dispersion matrix or (co)variance matrix.
Let $C$ be a matrix of constants conformable for multiplication with the vector $y$; then
$$\begin{aligned} Var(Cy) &= E(Cyy'C') - E(Cy)E(y'C') \\ &= CE(yy')C' - CE(y)E(y')C' \\ &= C\left[E(yy') - E(y)E(y')\right]C' \\ &= C\,Var(y)\,C' = CVC'. \end{aligned}$$
If there are two sets of functions of $y$, say $C_1y$ and $C_2y$, then
$$Cov(C_1y, C_2y) = C_1VC_2'.$$
If $y$ and $z$ represent two different random vectors, possibly of different orders, and if the (co)variance matrix between these two vectors is $W$, then
$$Cov(C_1y, C_2z) = C_1WC_2'.$$
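A Monte Carlo sketch of the result $Var(Cy) = CVC'$, using an arbitrary $V$ and $C$ (mvrnorm is from the MASS package):

# Check Var(Cy) = CVC' by simulation
require(MASS)
V = matrix(c(4, 1, 1, 3), 2, 2)
C = matrix(c(1, 1, 1, -1), 2, 2, byrow = TRUE)
y = mvrnorm(20000, mu = c(0, 0), Sigma = V)  # one sample per row
cov(y %*% t(C))      # sample covariance matrix of Cy
C %*% V %*% t(C)     # theoretical CVC'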
5 Quadratic Forms
Variances are estimated from sums of squares of normally distributed variables, and these are known as quadratic forms. The general quadratic form is
$$y'Qy,$$
where $y$ is a random vector variable and $Q$ is a regulator matrix. Usually $Q$ is a symmetric matrix, but not necessarily positive definite.
Examples of different $Q$ matrices are as follows (a small R sketch follows this list):
1. $Q = I$, then $y'Qy = y'y$, which is the total sum of squares of the elements in $y$.
2. $Q = J(1/n)$, then $y'Qy = y'Jy(1/n)$, where $n$ is the length of $y$. Note that $J = 11'$, so that $y'Jy = (y'1)(1'y)$, and $(1'y)$ is the sum of the elements in $y$.
3. $Q = (I - J(1/n))/(n-1)$, then $y'Qy$ gives the variance of the elements in $y$, $\hat\sigma^2_y$.
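The sketch below applies the three example $Q$ matrices to an arbitrary numeric vector and confirms that they reproduce the total sum of squares, the squared sum divided by $n$, and the sample variance:

# The three example Q matrices applied to a small vector
y = c(29, 53, 44)
n = length(y)
J = matrix(1, n, n)
Q1 = diag(n)
Q2 = J/n
Q3 = (diag(n) - J/n)/(n - 1)
c(t(y) %*% Q1 %*% y, sum(y^2))      # total sum of squares
c(t(y) %*% Q2 %*% y, sum(y)^2/n)    # squared sum over n
c(t(y) %*% Q3 %*% y, var(y))        # sample variance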
The expected value of a quadratic form is
$$E(y'Qy) = E(tr(y'Qy)) = E(tr(Qyy')) = tr(QE(yy')),$$
and the covariance matrix is
$$Var(y) = E(yy') - E(y)E(y'),$$
so that
$$E(yy') = Var(y) + E(y)E(y');$$
then
$$E(y'Qy) = tr\left(Q(Var(y) + E(y)E(y'))\right).$$
Let $Var(y) = V$ and $E(y) = \mu$; then
$$\begin{aligned} E(y'Qy) &= tr(Q(V + \mu\mu')) \\ &= tr(QV) + tr(Q\mu\mu') \\ &= tr(QV) + \mu'Q\mu. \end{aligned}$$
The expectation of a quadratic form was derived without knowing the distribution of $y$. However, the variance of a quadratic form requires that $y$ follow a multivariate normal distribution. Without showing the derivation, the variance of a quadratic form, assuming $y$ has a multivariate normal distribution, is
$$Var(y'Qy) = 2\,tr(QVQV) + 4\mu'QVQ\mu.$$
The quadratic form $y'Qy$ has a chi-square distribution if
$$tr(QVQV) = tr(QV) \quad \mbox{and} \quad \mu'QVQ\mu = \mu'Q\mu,$$
or under the single condition that $QV$ is idempotent. Then if
$$m = tr(QV) \quad \mbox{and} \quad \lambda = .5\mu'Q\mu,$$
the expected value of $y'Qy$ is $m + 2\lambda$ and the variance is $2m + 8\lambda$, which are the usual results for a noncentral chi-square variable.
The covariance between two quadratic forms, say $y'Qy$ and $y'Py$, is
$$Cov(y'Qy, y'Py) = 2\,tr(QVPV) + 4\mu'QVP\mu.$$
The covariance is zero if $QVP = 0$, in which case the two quadratic forms are said to be independent.
6 Basic Model for Variance Components
The general linear model is described as
$$y = Xb + Zu + e,$$
where $E(y) = Xb$, $E(u) = 0$, and $E(e) = 0$.
Often $u$ is partitioned into $s$ factors as
$$u' = (u_1' \;\; u_2' \;\; \ldots \;\; u_s').$$
The (co)variance matrices are defined as
$$G = Var(u) = Var\begin{pmatrix} u_1 \\ u_2 \\ \vdots \\ u_s \end{pmatrix} = \begin{pmatrix} G_1\sigma^2_1 & 0 & \ldots & 0 \\ 0 & G_2\sigma^2_2 & \ldots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \ldots & G_s\sigma^2_s \end{pmatrix}$$
and
$$R = Var(e) = I\sigma^2_0.$$
Then
$$Var(y) = V = ZGZ' + R,$$
and if $Z$ is partitioned corresponding to $u$, as $Z = [\,Z_1 \;\; Z_2 \;\; \ldots \;\; Z_s\,]$, then
$$ZGZ' = \sum_{i=1}^{s} Z_iG_iZ_i'\sigma^2_i.$$
Let $V_i = Z_iG_iZ_i'$ and $V_0 = I$; then
$$V = \sum_{i=0}^{s} V_i\sigma^2_i.$$
6.1 Mixed Model Equations
Henderson's mixed model equations (MME) are written as
$$\begin{pmatrix} X'R^{-1}X & X'R^{-1}Z_1 & X'R^{-1}Z_2 & \ldots & X'R^{-1}Z_s \\ Z_1'R^{-1}X & Z_1'R^{-1}Z_1 + G_1^{-1}\sigma_1^{-2} & Z_1'R^{-1}Z_2 & \ldots & Z_1'R^{-1}Z_s \\ Z_2'R^{-1}X & Z_2'R^{-1}Z_1 & Z_2'R^{-1}Z_2 + G_2^{-1}\sigma_2^{-2} & \ldots & Z_2'R^{-1}Z_s \\ \vdots & \vdots & \vdots & & \vdots \\ Z_s'R^{-1}X & Z_s'R^{-1}Z_1 & Z_s'R^{-1}Z_2 & \ldots & Z_s'R^{-1}Z_s + G_s^{-1}\sigma_s^{-2} \end{pmatrix} \begin{pmatrix} \hat{b} \\ \hat{u}_1 \\ \hat{u}_2 \\ \vdots \\ \hat{u}_s \end{pmatrix} = \begin{pmatrix} X'R^{-1}y \\ Z_1'R^{-1}y \\ Z_2'R^{-1}y \\ \vdots \\ Z_s'R^{-1}y \end{pmatrix}.$$
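A minimal sketch of building and solving these equations in R, using the small example of Section 7 with all $G_i = I$ and assumed variances (the variable names are illustrative):

# Henderson's MME for a small example
X = matrix(1, 3, 1)
Z1 = matrix(c(1,0, 1,0, 0,1), 3, 2, byrow = TRUE)
Z2 = matrix(c(1,0, 0,1, 0,1), 3, 2, byrow = TRUE)
y = c(29, 53, 44)
s0 = 250; s1 = 10; s2 = 80             # assumed variances
W = cbind(X, Z1, Z2)
Rinv = diag(3)/s0
Ginv = diag(c(1/s1, 1/s1, 1/s2, 1/s2))
LHS = t(W) %*% Rinv %*% W
LHS[2:5, 2:5] = LHS[2:5, 2:5] + Ginv   # add G-inverse to the random block
RHS = t(W) %*% Rinv %*% y
sol = solve(LHS, RHS)                  # (b-hat, u1-hat, u2-hat)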
7 Unbiased Estimation of Variances
Assume that all $G_i$ are equal to $I$ for this example, so that $Z_iG_iZ_i'$ simplifies to $Z_iZ_i'$. Let
$$X = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix}, \quad Z_1 = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}, \quad Z_2 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 1 \end{pmatrix}, \quad \mbox{and} \quad y = \begin{pmatrix} 29 \\ 53 \\ 44 \end{pmatrix}.$$
Then
$$V_1 = Z_1Z_1' = \begin{pmatrix} 1 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \quad V_2 = Z_2Z_2' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 1 \end{pmatrix}, \quad \mbox{and} \quad V_0 = I.$$
7.1 Define the Necessary Quadratic Forms
At least three quadratic forms are needed in order to estimate the three variances. Below are three arbitrary Q-matrices that were chosen such that $Q_kX = 0$. Let
$$Q_1 = \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix}, \quad Q_2 = \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & -1 \\ -1 & -1 & 2 \end{pmatrix}, \quad \mbox{and} \quad Q_3 = \begin{pmatrix} 2 & -1 & -1 \\ -1 & 2 & -1 \\ -1 & -1 & 2 \end{pmatrix}.$$
The numeric values of the quadratic forms are
$$y'Q_1y = 657, \quad y'Q_2y = 306, \quad \mbox{and} \quad y'Q_3y = 882.$$
For example,
$$y'Q_1y = \begin{pmatrix} 29 & 53 & 44 \end{pmatrix} \begin{pmatrix} 1 & -1 & 0 \\ -1 & 2 & -1 \\ 0 & -1 & 1 \end{pmatrix} \begin{pmatrix} 29 \\ 53 \\ 44 \end{pmatrix} = 657.$$
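These values are easy to reproduce in R:

# Numeric values of the three quadratic forms
y = c(29, 53, 44)
Q1 = matrix(c(1,-1,0, -1,2,-1, 0,-1,1), 3, 3, byrow = TRUE)
Q2 = matrix(c(1,0,-1, 0,1,-1, -1,-1,2), 3, 3, byrow = TRUE)
Q3 = matrix(c(2,-1,-1, -1,2,-1, -1,-1,2), 3, 3, byrow = TRUE)
c(t(y) %*% Q1 %*% y, t(y) %*% Q2 %*% y, t(y) %*% Q3 %*% y)
# 657 306 882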
7.2 The Expectations of the Quadratic Forms
The expectations of the quadratic forms are
$$\begin{aligned} E(y'Q_1y) &= tr(Q_1V_0)\sigma^2_0 + tr(Q_1V_1)\sigma^2_1 + tr(Q_1V_2)\sigma^2_2 = 4\sigma^2_0 + 2\sigma^2_1 + 2\sigma^2_2, \\ E(y'Q_2y) &= 4\sigma^2_0 + 4\sigma^2_1 + 2\sigma^2_2, \\ E(y'Q_3y) &= 6\sigma^2_0 + 4\sigma^2_1 + 4\sigma^2_2. \end{aligned}$$
7.3 Equate Expected Values to Numerical Values
Equate the numeric values of the quadratic forms to their corresponding expected values, which gives a system of equations to be solved, say $F\sigma = w$. In this case, the equations are
$$\begin{pmatrix} 4 & 2 & 2 \\ 4 & 4 & 2 \\ 6 & 4 & 4 \end{pmatrix} \begin{pmatrix} \sigma^2_0 \\ \sigma^2_1 \\ \sigma^2_2 \end{pmatrix} = \begin{pmatrix} 657 \\ 306 \\ 882 \end{pmatrix},$$
which gives the solution as $\hat\sigma = F^{-1}w$, or
$$\begin{pmatrix} \hat\sigma^2_0 \\ \hat\sigma^2_1 \\ \hat\sigma^2_2 \end{pmatrix} = \begin{pmatrix} 216.0 \\ -175.5 \\ 72.0 \end{pmatrix}.$$
The resulting estimates are unbiased.
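A sketch of the whole calculation in R, building $F$ from the traces $tr(Q_kV_i)$ and solving the system (Q1, Q2, and Q3 as defined in the code above; F is named Fmat to avoid R's shorthand for FALSE):

# Build F from traces and solve F s = w
V0 = diag(3)
V1 = matrix(c(1,1,0, 1,1,0, 0,0,1), 3, 3, byrow = TRUE)
V2 = matrix(c(1,0,0, 0,1,1, 0,1,1), 3, 3, byrow = TRUE)
Qs = list(Q1, Q2, Q3)
Vs = list(V0, V1, V2)
Fmat = matrix(0, 3, 3)
for(k in 1:3) for(i in 1:3)
  Fmat[k, i] = sum(diag(Qs[[k]] %*% Vs[[i]]))
w = c(657, 306, 882)
solve(Fmat, w)   # 216.0 -175.5 72.0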
7.4 Variances of Quadratic Forms
The variance of a quadratic form is
$$Var(y'Qy) = 2\,tr(QVQV) + 4b'X'QVQXb.$$
Only translation-invariant quadratic forms are typically considered in variance component estimation, which means $b'X'QVQXb = 0$. Thus, only $2\,tr(QVQV)$ needs to be calculated. Remember that $V$ can be written as the sum of $s+1$ matrices, $V_i\sigma^2_i$; then
$$tr(QVQV) = tr\left(Q\sum_{i=0}^{s}V_i\sigma^2_i \; Q\sum_{j=0}^{s}V_j\sigma^2_j\right) = \sum_{i=0}^{s}\sum_{j=0}^{s} tr(QV_iQV_j)\,\sigma^2_i\sigma^2_j.$$
For example, if $s = 2$, then
$$\begin{aligned} tr(QVQV) &= tr(QV_0QV_0)\sigma^4_0 + 2\,tr(QV_0QV_1)\sigma^2_0\sigma^2_1 + 2\,tr(QV_0QV_2)\sigma^2_0\sigma^2_2 \\ &\quad + tr(QV_1QV_1)\sigma^4_1 + 2\,tr(QV_1QV_2)\sigma^2_1\sigma^2_2 + tr(QV_2QV_2)\sigma^4_2. \end{aligned}$$
The sampling variances depend on
1. the true magnitude of the individual components,
2. the matrices $Q_k$, which depend on the method of estimation and the model, and
3. the structure and amount of the data through $X$ and $Z$.
For small examples, the calculation of sampling variances can be easily demonstrated. In this case,
$$Var(F^{-1}w) = F^{-1}\,Var(w)\,(F^{-1})',$$
a function of the variance-covariance matrix of the quadratic forms.
Using the small example of the previous section, $Var(w)$ is a $3 \times 3$ matrix. The (1,1) element is the variance of $y'Q_1y$, which is
$$\begin{aligned} Var(y'Q_1y) &= 2\,tr(Q_1VQ_1V) \\ &= 2\,tr(Q_1V_0Q_1V_0)\sigma^4_0 + 4\,tr(Q_1V_0Q_1V_1)\sigma^2_0\sigma^2_1 + 4\,tr(Q_1V_0Q_1V_2)\sigma^2_0\sigma^2_2 \\ &\quad + 2\,tr(Q_1V_1Q_1V_1)\sigma^4_1 + 4\,tr(Q_1V_1Q_1V_2)\sigma^2_1\sigma^2_2 + 2\,tr(Q_1V_2Q_1V_2)\sigma^4_2 \\ &= 20\sigma^4_0 + 16\sigma^2_0\sigma^2_1 + 16\sigma^2_0\sigma^2_2 + 8\sigma^4_1 + 0\,\sigma^2_1\sigma^2_2 + 8\sigma^4_2. \end{aligned}$$
The (1,2) element is the covariance between the first and second quadratic forms,
$$Cov(y'Q_1y, y'Q_2y) = 2\,tr(Q_1VQ_2V),$$
and similarly for the other terms. All of the results are summarized in the table below.
$$\begin{array}{lcccccc} \mbox{Forms} & \sigma^4_0 & \sigma^2_0\sigma^2_1 & \sigma^2_0\sigma^2_2 & \sigma^4_1 & \sigma^2_1\sigma^2_2 & \sigma^4_2 \\ Var(w_1) & 20 & 16 & 16 & 8 & 0 & 8 \\ Cov(w_1, w_2) & 14 & 24 & 8 & 16 & 0 & 8 \\ Cov(w_1, w_3) & 24 & 24 & 24 & 16 & 0 & 16 \\ Var(w_2) & 20 & 48 & 16 & 32 & 16 & 8 \\ Cov(w_2, w_3) & 24 & 48 & 24 & 32 & 16 & 16 \\ Var(w_3) & 36 & 48 & 48 & 32 & 16 & 32 \end{array}$$
To get numeric values for these variances, the true components need to be known. Assume that the true values are $\sigma^2_0 = 250$, $\sigma^2_1 = 10$, and $\sigma^2_2 = 80$; then the variance of $w_1$ is
$$Var(w_1) = 20(250)^2 + 16(250)(10) + 16(250)(80) + 8(10)^2 + 0(10)(80) + 8(80)^2 = 1{,}662{,}000.$$
The complete variance-covariance matrix of the quadratic forms is
$$Var\begin{pmatrix} w_1 \\ w_2 \\ w_3 \end{pmatrix} = \begin{pmatrix} 1{,}662{,}000 & 1{,}147{,}800 & 2{,}144{,}000 \\ 1{,}147{,}800 & 1{,}757{,}200 & 2{,}218{,}400 \\ 2{,}144{,}000 & 2{,}218{,}400 & 3{,}550{,}800 \end{pmatrix}.$$
The variance-covariance matrix of the estimated variances (assuming the above true values) would be
$$Var(\hat\sigma) = F^{-1}\,Var(w)\,(F^{-1})' = \begin{pmatrix} 405{,}700 & -275{,}700 & -240{,}700 \\ -275{,}700 & 280{,}900 & 141{,}950 \\ -240{,}700 & 141{,}950 & 293{,}500 \end{pmatrix} = C.$$
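Continuing the sketch of the previous sections, both $Var(w)$ and $C$ can be computed directly in R (Qs, Vs, and Fmat as defined earlier; the true components assumed as above):

# Var(w) and the VCV matrix of the estimates
V = 250*V0 + 10*V1 + 80*V2
Vw = matrix(0, 3, 3)
for(k in 1:3) for(l in 1:3)
  Vw[k, l] = 2*sum(diag(Qs[[k]] %*% V %*% Qs[[l]] %*% V))
Finv = solve(Fmat)
C = Finv %*% Vw %*% t(Finv)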
7.5 Variance of a Ratio of Variance Estimates
Estimates of ratios of functions of the variances are often needed in animal breeding work, such as heritabilities, repeatabilities, and variance ratios. Let such a ratio be denoted as $a/c$, where
$$a = \hat\sigma^2_2 = (0 \;\; 0 \;\; 1)\,\hat\sigma = 72$$
and
$$c = \hat\sigma^2_0 + \hat\sigma^2_1 + \hat\sigma^2_2 = (1 \;\; 1 \;\; 1)\,\hat\sigma = 288.$$
(NOTE: the negative estimate for $\hat\sigma^2_1$ was set to zero before calculating $c$.)
From Osborne and Patterson (1952) and Rao (1968), an approximation to the variance of a ratio is given by
$$Var(a/c) = \left(c^2\,Var(a) + a^2\,Var(c) - 2ac\,Cov(a,c)\right)/c^4.$$
Now note that
$$\begin{aligned} Var(a) &= (0 \;\; 0 \;\; 1)\,C\,(0 \;\; 0 \;\; 1)' = 293{,}500, \\ Var(c) &= (1 \;\; 1 \;\; 1)\,C\,(1 \;\; 1 \;\; 1)' = 231{,}200, \\ Cov(a, c) &= (0 \;\; 0 \;\; 1)\,C\,(1 \;\; 1 \;\; 1)' = 194{,}750. \end{aligned}$$
Then
$$Var(a/c) = \left[(288)^2(293{,}500) + (72)^2(231{,}200) - 2(72)(288)(194{,}750)\right]/(288)^4 = 2.53876.$$
This result is very large, but could be expected from only 3 observations. Thus, $(a/c) = .25$ with a standard deviation of 1.5933.
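The approximation is simple to package as a small R function (the argument names are illustrative):

# Approximate variance of a ratio of variance estimates
varratio = function(a, c, va, vc, cac) {
  (c^2*va + a^2*vc - 2*a*c*cac)/c^4
}
varratio(72, 288, 293500, 231200, 194750)   # 2.53876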
Another approximation method assumes that the denominator has been estimated accurately, so that it can be treated as a constant, such as an estimate of $\sigma^2_e$. Then
$$Var(a/c) \cong Var(a)/c^2.$$
For the example problem, this gives
$$Var(a/c) \cong 293{,}500/(288)^2 = 3.53853,$$
which is larger than the previous approximation. The second approximation would not be suitable for a ratio of the residual variance to the variance of one of the other components. Suppose $a = \hat\sigma^2_0 = 216$ and $c = \hat\sigma^2_2 = 72$; then $(a/c) = 3.0$, and
$$Var(a/c) = \left[(72)^2(405{,}700) + (216)^2(293{,}500) - 2(72)(216)(-240{,}700)\right]/(72)^4 = 866.3966$$
with the first method, and
$$Var(a/c) = 405{,}700/(72)^2 = 78.26$$
with the second method. The first method is probably more realistic in this situation, but both are very large.
8 Useful Derivatives of Quantities
The following results are needed for the derivation of methods of variance component estimation based on the multivariate normal distribution.
1. The (co)variance matrix of $y$ is
$$V = \sum_{i=1}^{s} Z_iG_iZ_i'\sigma^2_i + I\sigma^2_0 = ZGZ' + R.$$
Usually, each $G_i$ is assumed to be $I$ for most random factors, but for animal models $G_i$ might be equal to $A$, the additive genetic relationship matrix. Thus, $G_i$ does not always have to be diagonal, and will not be an identity in animal model analyses.
2. The inverse of $V$ is
$$V^{-1} = R^{-1} - R^{-1}Z(Z'R^{-1}Z + G^{-1})^{-1}Z'R^{-1}.$$
To prove this, show that $VV^{-1} = I$. Let $T = Z'R^{-1}Z + G^{-1}$; then
$$\begin{aligned} VV^{-1} &= (ZGZ' + R)\left[R^{-1} - R^{-1}ZT^{-1}Z'R^{-1}\right] \\ &= ZGZ'R^{-1} - ZGZ'R^{-1}ZT^{-1}Z'R^{-1} + I - ZT^{-1}Z'R^{-1} \\ &= I + \left[ZGT - ZGZ'R^{-1}Z - Z\right](T^{-1}Z'R^{-1}) \\ &= I + \left[ZG(Z'R^{-1}Z + G^{-1}) - ZGZ'R^{-1}Z - Z\right](T^{-1}Z'R^{-1}) \\ &= I + \left[ZGZ'R^{-1}Z + Z - ZGZ'R^{-1}Z - Z\right](T^{-1}Z'R^{-1}) \\ &= I + [0](T^{-1}Z'R^{-1}) \\ &= I. \end{aligned}$$
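A numeric check of this identity, using the small example matrices and assumed variances from the earlier sketches (Z1 and Z2 as defined above):

# Verify the V-inverse identity numerically
G = diag(c(10, 10, 80, 80))        # Var(u) for the small example
R = diag(3)*250
Z = cbind(Z1, Z2)
V = Z %*% G %*% t(Z) + R
Tmat = t(Z) %*% solve(R) %*% Z + solve(G)
Vinv = solve(R) - solve(R) %*% Z %*% solve(Tmat) %*% t(Z) %*% solve(R)
range(V %*% Vinv - diag(3))        # should be numerically zero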
3. If $k$ is a scalar constant and $A$ is any square matrix of order $m$, then
$$|Ak| = k^m\,|A|.$$
4. For general square matrices of the same order, say $M$ and $U$,
$$|MU| = |M|\,|U|.$$
5. For the general partitioned matrix below, with $A$ and $D$ being square and non-singular (i.e. the inverse of each exists),
$$\left| \begin{array}{cc} A & B \\ -Q & D \end{array} \right| = |A|\,|D + QA^{-1}B| = |D|\,|A + BD^{-1}Q|.$$
Then if $A = I$ and $D = I$, so that $|I| = 1$,
$$|I + QB| = |I + BQ| = |I + B'Q'| = |I + Q'B'|.$$
6. Using the results in (4) and (5),
$$\begin{aligned} |V| &= |R + ZGZ'| \\ &= |R(I + R^{-1}ZGZ')| \\ &= |R|\,|I + R^{-1}ZGZ'| \\ &= |R|\,|I + Z'R^{-1}ZG| \\ &= |R|\,|(G^{-1} + Z'R^{-1}Z)G| \\ &= |R|\,|G^{-1} + Z'R^{-1}Z|\,|G|. \end{aligned}$$
7. The mixed model coefficient matrix of Henderson can be denoted by
$$C = \begin{pmatrix} X'R^{-1}X & X'R^{-1}Z \\ Z'R^{-1}X & Z'R^{-1}Z + G^{-1} \end{pmatrix};$$
then the determinant of $C$ can be derived as
$$\begin{aligned} |C| &= |X'R^{-1}X|\; |G^{-1} + Z'(R^{-1} - R^{-1}X(X'R^{-1}X)^{-}X'R^{-1})Z| \\ &= |Z'R^{-1}Z + G^{-1}|\; |X'(R^{-1} - R^{-1}Z(Z'R^{-1}Z + G^{-1})^{-1}Z'R^{-1})X|. \end{aligned}$$
Now let $S = R^{-1} - R^{-1}X(X'R^{-1}X)^{-}X'R^{-1}$; then
$$\begin{aligned} |C| &= |X'R^{-1}X|\; |G^{-1} + Z'SZ| \\ &= |Z'R^{-1}Z + G^{-1}|\; |X'V^{-1}X|. \end{aligned}$$
8. A projection matrix, $P$, is defined as
$$P = V^{-1} - V^{-1}X(X'V^{-1}X)^{-}X'V^{-1}.$$
Properties of $P$:
$$PX = 0, \qquad Py = V^{-1}(y - X\hat{b}), \quad \mbox{where} \quad \hat{b} = (X'V^{-1}X)^{-}X'V^{-1}y.$$
Therefore,
$$y'PZ_iG_iZ_i'Py = (y - X\hat{b})'V^{-1}Z_iG_iZ_i'V^{-1}(y - X\hat{b}).$$
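A numeric illustration of these properties, continuing the example above (V as constructed in the previous sketch):

# Projection matrix and its properties
X = matrix(1, 3, 1)
Vinv = solve(V)
P = Vinv - Vinv %*% X %*% solve(t(X) %*% Vinv %*% X) %*% t(X) %*% Vinv
range(P %*% X)    # PX = 0
y = c(29, 53, 44)
b = solve(t(X) %*% Vinv %*% X) %*% t(X) %*% Vinv %*% y
range(P %*% y - Vinv %*% (y - X %*% b))   # Py = Vinv(y - Xb)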
9. The derivative of $V^{-1}$ is
$$\frac{\partial V^{-1}}{\partial \sigma^2_i} = -V^{-1}\frac{\partial V}{\partial \sigma^2_i}V^{-1}.$$
10. The derivative of $\ln|V|$ is
$$\frac{\partial \ln|V|}{\partial \sigma^2_i} = tr\left(V^{-1}\frac{\partial V}{\partial \sigma^2_i}\right).$$
11. The derivative of $P$ is
$$\frac{\partial P}{\partial \sigma^2_i} = -P\frac{\partial V}{\partial \sigma^2_i}P.$$
12. The derivative of $V$ is
$$\frac{\partial V}{\partial \sigma^2_i} = Z_iG_iZ_i'.$$
13. The derivative of $\ln|X'V^{-1}X|$ is
$$\frac{\partial \ln|X'V^{-1}X|}{\partial \sigma^2_i} = -tr\left((X'V^{-1}X)^{-}X'V^{-1}\frac{\partial V}{\partial \sigma^2_i}V^{-1}X\right).$$
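Result (10), for example, can be checked with a finite-difference approximation (V1 and V2 as in the earlier sketches; the step size h is arbitrary):

# Check d ln|V| / d sigma2_1 = tr(Vinv V1) numerically
# (determinant() returns the log of the determinant by default)
Vfun = function(s) diag(3)*s[1] + V1*s[2] + V2*s[3]
s = c(250, 10, 80)
h = 1e-6
num = (determinant(Vfun(s + c(0, h, 0)))$modulus -
       determinant(Vfun(s))$modulus)/h
ana = sum(diag(solve(Vfun(s)) %*% V1))
c(num, ana)   # the two values should agree closely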
9 Random Number Generators and R
R has some very good random number generators built into it. These functions are very useful for applying Gibbs sampling methods in Bayesian estimation. Generators for the uniform, normal, Chi-square, and Wishart distributions are necessary to have. Below are some examples of the various functions in R. The package called MCMCpack should be obtained from the CRAN website.
require(MCMCpack)
# Uniform distribution generator
# arguments: number of variates, minimum and maximum of the range
x = runif(100, min = 5, max = 10)
# Normal distribution generator
# arguments: number of deviates, mean, standard deviation
w = rnorm(200, -12, 16.3)
# Chi-square generator
# arguments: number of deviates, degrees of freedom,
# and the non-centrality parameter, usually 0
w = rchisq(15, 24, 0)
# Inverted Wishart matrix generator
# arguments: degrees of freedom, and a matrix of sums of squares
# and crossproducts (example values given here so the call runs)
df = 12
SS = matrix(c(9, 2, 2, 6), 2, 2)
U = riwish(df, SS)
# New covariance matrix is the inverse of U
V = ginv(U)
A Chi-square variate with m degrees of freedom is the sum of squares of m random normal deviates. The random number generator, however, makes use of a gamma distribution, which with the appropriate parameters is a Chi-square distribution.
The uniform distribution is the key distribution underlying all of the other generators. R uses the Mersenne Twister (Matsumoto and Nishimura, 1997), with a cycle time of $2^{19937} - 1$. The Twister is based on a Mersenne prime number.
10 Positive Definite Matrices
A covariance matrix should be positive definite. To check a matrix, compute the eigenvalues and eigenvectors of the matrix. All eigenvalues should be positive. If they are not positive, then they can be modified, and a new covariance matrix constructed from the eigenvectors and the modified set of eigenvalues. The procedure is shown in the following R statements.
# Compute eigenvalues and eigenvectors
GE = eigen(G)
nre = length(GE$values)
for(i in 1:nre) {
  qp = GE$values[i]
  # replace any negative eigenvalue with a small positive value
  if(qp < 0) qp = (qp*qp)/10000
  GE$values[i] = qp
}
# Re-form the new matrix
Gh = GE$vectors
GG = Gh %*% diag(GE$values) %*% t(Gh)
If the eigenvalues are all positive, then the new matrix, GG, will be the same as the input
matrix, G.
11 EXERCISES
1. This is an example of Henderson's Method 1 of unbiased estimation of variance components. Let
$$y = 1\mu + Z_1u_1 + Z_2u_2 + e,$$
with data as follows:
$$\begin{pmatrix} 15 \\ 42 \\ 20 \\ 36 \\ 50 \\ 17 \\ 34 \\ 23 \\ 28 \\ 31 \\ 45 \\ 37 \end{pmatrix} = \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}\mu + \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} u_{11} \\ u_{12} \\ u_{13} \end{pmatrix} + \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} u_{21} \\ u_{22} \\ u_{23} \\ u_{24} \end{pmatrix} + e.$$
Also,
$$V = Z_1Z_1'\sigma^2_1 + Z_2Z_2'\sigma^2_2 + I\sigma^2_0.$$
Calculate the following:
(a) $M = 1(1'1)^{-1}1'$
(b) $A = Z_1(Z_1'Z_1)^{-1}Z_1'$
(c) $B = Z_2(Z_2'Z_2)^{-1}Z_2'$
(d) $Q_0 = I - M$
(e) $Q_1 = A - M$
(f) $Q_2 = B - M$
(g) $y'Q_0y$
(h) $y'Q_1y$
(i) $y'Q_2y$
(j) $E(y'Q_0y) = tr(Q_0V_0)\sigma^2_0 + tr(Q_0V_1)\sigma^2_1 + tr(Q_0V_2)\sigma^2_2$
(k) $E(y'Q_1y)$
(l) $E(y'Q_2y)$
(m) Estimate the variances.
(n) Compute the variances of the estimated variances.
2. Check the following matrix for positive definiteness, and create a new modified matrix from it that is positive definite (if it is not already positive definite):
$$R = \begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 2 & 3 & 1 & 3 & 4 \\ 3 & 1 & 7 & 3 & 5 \\ 4 & 3 & 3 & 11 & 2 \\ 5 & 4 & 5 & 2 & 15 \end{pmatrix}.$$