
Debre Berhan University

College of Natural and Computational Science

Department of Statistics

SEMINAR IN CONCEPTS OF BAYESIAN INFERENCE

BAYESIAN POINT ESTIMATION AND ASYMPTOTIC PROPERTIES OF ESTIMATORS

Master of Biostatistics

By
Wudneh Ketema
ID No. 005/08

Submitted to: Dr. A. R. Muralidharan

December 6, 2016
Debre Berhan, Ethiopia

Contents

1 Introduction
    1.1 Background
2 Kinds of inference problems
    2.1 Point estimators
    2.2 Hypothesis testing
3 Bayesian point estimation
4 Asymptotic properties of point estimation
    4.1 Finite-sample properties
    4.2 Large-sample properties
5 Conclusion

1 Introduction

1.1 Background

Suppose X₁, ..., Xₙ are iid with PDF/PMF f(x | θ). The point estimation problem seeks to find a quantity θ̂, called an estimator, depending on the values of X₁, ..., Xₙ, which is a good guess, or estimate, of the unknown θ. The choice of θ̂ depends not only on the data, but also on the assumed model and the definition of "good." Initially, it seems obvious what should be meant by good: that θ̂ is close to θ. But as soon as one remembers that θ itself is unknown, the question of what it even means for θ̂ to be close to θ becomes uncertain.

It would be quite unreasonable to believe that one point estimate θ̂ hits the unknown θ on the nose. Indeed, one should expect that θ̂ will miss θ by some positive amount. Therefore, in addition to a point estimate θ̂, it would be helpful to have some estimate of the amount by which θ̂ will miss θ. The necessary information is encoded in the sampling distribution of θ̂, and we can summarize this by, say, reporting the standard error or variance of θ̂, or with a confidence interval.

2 Kinds of inference problems

2.1 Point estimators

In statistics, point estimation involves the use of sample data to calculate a single value (known as a statistic) which is to serve as a best guess or best estimate of an unknown (fixed or random) population parameter. More formally, it is the application of a point estimator to the data.

In general, point estimation should be contrasted with interval estimation: such interval estimates are typically either confidence intervals, in the case of frequentist inference, or credible intervals, in the case of Bayesian inference. Common approaches to point estimation include:

- the minimum-variance mean-unbiased estimator (MVUE), which minimizes the risk (expected loss) of the squared-error loss function;
- the best linear unbiased estimator (BLUE);
- the minimum mean squared error (MMSE) estimator;
- the median-unbiased estimator, which minimizes the risk of the absolute-error loss function;
- maximum likelihood (ML);
- the method of moments and the generalized method of moments.

2.2 Hypothesis testing

Unlike the point estimation problem, which starts with a vague question like "what is θ?", the hypothesis testing problem starts with a specific question like "is θ equal to θ₀?", where θ₀ is some specified value. The popular t-test problem is one in this general class. In this case, notice that the goal is somewhat different from that of estimating an unknown parameter. The general setup is to construct a decision rule, depending on the data, by which one can decide whether θ = θ₀ or θ ≠ θ₀. Oftentimes this rule consists of taking a point estimate θ̂ and comparing it with the hypothesized value θ₀: if θ̂ is too far from θ₀, conclude θ ≠ θ₀; otherwise, conclude θ = θ₀.
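As a concrete illustration (a sketch with hypothetical data, not part of the original report), SciPy's one-sample t-test implements exactly this kind of decision rule, rejecting θ = θ₀ when the sample mean is too far from θ₀ relative to its standard error:

```python
# A minimal sketch (assumed data): decide between theta = theta_0 and
# theta != theta_0 with a one-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
theta_0 = 0.0
data = rng.normal(0.5, 1.0, size=30)   # hypothetical sample; true mean is 0.5

t_stat, p_value = stats.ttest_1samp(data, theta_0)
print(t_stat, p_value)
print("conclude theta != theta_0" if p_value < 0.05 else "conclude theta = theta_0")
```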

3 Bayesian point estimation

Bayesian inference is typically based on the posterior distribution. Many Bayesian point estimators are the posterior distribution's statistics of central tendency, e.g., its mean, median, or mode:

- The posterior mean, which minimizes the (posterior) risk (expected loss) for a squared-error loss function; in Bayesian estimation, the risk is defined in terms of the posterior distribution, as observed by Gauss.
- The posterior median, which minimizes the posterior risk for the absolute-value loss function, as observed by Laplace.
- The maximum a posteriori (MAP) estimate, which finds a maximum of the posterior distribution; for a uniform prior probability, the MAP estimator coincides with the maximum-likelihood estimator. The MAP estimator has good asymptotic properties, even for many difficult problems on which the maximum-likelihood estimator has difficulties. For regular problems, where the maximum-likelihood estimator is consistent, the maximum-likelihood estimator ultimately agrees with the MAP estimator.

Bayesian estimators are admissible, by Wald's theorem.
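To make the three estimators concrete, the following is a minimal Python sketch for a conjugate Beta-Binomial model; the prior hyperparameters and the data below are hypothetical, chosen only for illustration:

```python
# A sketch (assumed model and data): Bayesian point estimates of a binomial
# success probability theta under a Beta(a, b) prior. With k successes in
# n trials, the posterior is Beta(a + k, b + n - k).
from scipy import stats

a, b = 2.0, 2.0   # hypothetical prior hyperparameters
n, k = 20, 13     # hypothetical data: 13 successes in 20 trials
posterior = stats.beta(a + k, b + n - k)

post_mean = posterior.mean()                 # minimizes posterior squared-error loss
post_median = posterior.median()             # minimizes posterior absolute-error loss
post_mode = (a + k - 1) / (a + b + n - 2)    # MAP: mode of the Beta posterior (valid when both shape parameters exceed 1)

print(post_mean, post_median, post_mode)
```

Note that with a flat Beta(1, 1) prior, the posterior mode reduces to k/n, the maximum-likelihood estimate, matching the MAP/ML coincidence noted above.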

4 Asymptotic properties of point estimation

Properties of point estimators fall into two groups:

- finite-sample properties
- large-sample properties

4.1 Finite-sample properties

Finite-sample properties describe how an estimator performs for a finite number of observations n. Let W be an estimator of a parameter θ. Criteria for evaluating estimators include:

- Bias: does E(W) = θ?
- Variance of W (you would like an estimator with a smaller variance).

Example: let X₁, ..., Xₙ be i.i.d. with mean μ and variance σ². The unknown parameters are μ and σ². Consider:

- μ̂ₙ = (1/n) Σᵢ Xᵢ as an estimator of μ;
- σ̂ₙ² = (1/n) Σᵢ (Xᵢ − X̄ₙ)² as an estimator of σ².

1. Bias: E(μ̂ₙ) = (1/n) · nμ = μ, so μ̂ₙ is unbiased.

2. Variance: Var(μ̂ₙ) = (1/n²) · nσ² = σ²/n.

The mean-squared error (MSE) of W is E(W − θ)², a common criterion for comparing estimators. It decomposes as

MSE(W) = Var(W) + (E(W) − θ)² = Variance + (Bias)².

Hence, for an unbiased estimator, MSE(W) = Var(W). Best unbiased estimator: if you choose the best (in terms of MSE) estimator and restrict yourself to unbiased estimators, then the best estimator is the one with the lowest variance. A best unbiased estimator is also called the uniform minimum variance unbiased estimator (UMVUE).

Formally, an estimator W is a UMVUE of θ if it satisfies:

(i) E_θ(W) = θ for all θ (unbiasedness);
(ii) Var_θ(W) ≤ Var_θ(W′) for all θ and all other unbiased estimators W′.
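A short simulation sketch (assumed, not part of the original report) illustrates these finite-sample criteria: the sample mean is unbiased with variance σ²/n, while the divide-by-n variance estimator above has expectation (n − 1)σ²/n and is therefore biased downward:

```python
# A minimal simulation sketch (assumed): check that the sample mean is
# unbiased with variance sigma^2 / n, while the divide-by-n variance
# estimator is biased downward by the factor (n - 1) / n.
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 10, 100_000

x = rng.normal(mu, sigma, size=(reps, n))
mu_hat = x.mean(axis=1)                              # sample means
var_hat = ((x - mu_hat[:, None]) ** 2).mean(axis=1)  # divide-by-n estimator

print(mu_hat.mean(), mu_hat.var(), sigma**2 / n)   # ~5.0, ~0.4 vs 0.4
print(var_hat.mean(), (n - 1) / n * sigma**2)      # ~3.6 vs 3.6: biased below 4.0
```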

4.2 Large-sample properties

It can be difficult to compute the MSE, risk functions, etc., for some estimators, especially when the estimator does not resemble a sample average. Large-sample properties exploit the law of large numbers (LLN) and the central limit theorem (CLT).

Consistency. We say that Wₙ is consistent for a parameter θ iff the random sequence Wₙ converges (in some stochastic sense) to θ. Strong consistency obtains when Wₙ → θ almost surely; weak consistency obtains when Wₙ →ᵖ θ (convergence in probability). In other words, an estimator is consistent if the estimate it constructs is guaranteed to converge to the true parameter value as the quantity of data increases. Consistency is nearly always a desirable property for a statistical estimator.
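A simulation sketch (assumed) of weak consistency: by the LLN, the sample mean of i.i.d. draws concentrates around the true mean as n grows:

```python
# A sketch (assumed): consistency of the sample mean. As n grows, the
# sample mean concentrates around the true mean mu (law of large numbers).
import numpy as np

rng = np.random.default_rng(5)
mu = 2.0
for n in (10, 1_000, 100_000):
    x = rng.exponential(scale=mu, size=n)   # true mean is mu
    print(n, x.mean())                      # approaches 2.0 as n grows
```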


Variance (and efficiency). It is useful to quantify this notion of reliability using a natural statistical metric: the variance of the estimator, Var(θ̂). All else being equal, an estimator with smaller variance is preferable to one with greater variance. This idea, combined with a bit more simple algebra, quantitatively explains the intuition that more data are better. For the relative-frequency estimator θ̂ = m/n, where m counts successes in n trials,

Var(θ̂) = Var(m/n) = (1/n²) Var(m).

Since m is binomially distributed, and the variance of the binomial distribution is nθ(1 − θ), we have

Var(θ̂) = θ(1 − θ)/n.

So the variance is inversely proportional to the sample size n, which means that relative-frequency estimation is more reliable when used with larger samples.
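A quick simulation sketch (assumed, not from the report) confirms the θ(1 − θ)/n formula:

```python
# A sketch (assumed): the relative-frequency estimator theta_hat = m / n
# has variance theta (1 - theta) / n, shrinking as n grows.
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3
for n in (10, 100, 1000):
    m = rng.binomial(n, theta, size=50_000)    # successes in n Bernoulli trials
    print(n, (m / n).var(), theta * (1 - theta) / n)   # simulated vs theoretical
```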


Maximum Likelihood Estimation (MLE)

The principle of MLE: choose the parameters that maximize the likelihood function. This is one of the most commonly used estimators in statistics. With MLE, we seek a point value for θ which maximizes the likelihood p(D | θ); we can denote this value θ̂. In MLE, θ̂ is a point estimate, not a random variable. Maximum likelihood relies on the relationship between likelihood and posterior probability to conclude that, under a uniform prior, if one model has a higher likelihood then it should also have a higher posterior probability; therefore the model with the highest likelihood should also have the highest posterior probability. Many common statistics, such as the mean as the estimate of the peak of a normal distribution, are really maximum likelihood conclusions.
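As a numerical sketch (assumed data, not from the report), the MLE for a normal model can be found by minimizing the negative log-likelihood; for this model the closed-form MLEs are the sample mean and the divide-by-n standard deviation, so they serve as a check:

```python
# A small sketch (assumed): numerical MLE for the mean and standard
# deviation of a normal sample, maximizing the log-likelihood.
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)
data = rng.normal(loc=3.0, scale=1.5, size=200)   # hypothetical sample

def neg_log_lik(params):
    mu, log_sigma = params                        # log-parametrize to keep sigma > 0
    return -stats.norm.logpdf(data, mu, np.exp(log_sigma)).sum()

res = optimize.minimize(neg_log_lik, x0=[0.0, 0.0])
mu_mle, sigma_mle = res.x[0], np.exp(res.x[1])
print(mu_mle, sigma_mle)
print(data.mean(), data.std())    # closed-form MLEs for the normal model
```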
Sufficiency

Let X₁, X₂, ..., Xₙ be a random sample from a probability distribution with unknown parameter θ. Then the statistic

Y = u(X₁, X₂, ..., Xₙ)

is said to be sufficient for θ if the conditional distribution of X₁, X₂, ..., Xₙ, given the statistic Y, does not depend on the parameter θ.
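A simulation sketch (assumed) of this definition for Bernoulli(θ) samples: the sum T = Σ Xᵢ is sufficient, so conditional on T = t, every 0/1 sequence with t ones is equally likely, with probability 1/C(n, t), whatever the value of θ:

```python
# A sketch (assumed): for Bernoulli(theta) samples, T = sum(X_i) is
# sufficient. Conditional on T = t, each of the C(n, t) sequences with
# t ones has the same probability, regardless of theta.
import numpy as np
from math import comb

rng = np.random.default_rng(3)
n, t = 5, 2
for theta in (0.2, 0.7):
    x = rng.binomial(1, theta, size=(500_000, n))
    cond = x[x.sum(axis=1) == t]                       # samples with T = t
    # frequency of one particular sequence, e.g. (1, 1, 0, 0, 0)
    freq = (cond == [1, 1, 0, 0, 0]).all(axis=1).mean()
    print(theta, freq, 1 / comb(n, t))                 # ~0.1 for both thetas
```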

5 Conclusion

We conclude from the above explanation that there are two kinds of inference problems: point estimation and hypothesis testing. A point estimator uses the sample data to produce a single value as an estimate of an unknown parameter. Hypothesis testing is a statistical procedure used to decide whether a parameter equals a specified value. There are also various properties by which point estimators are evaluated, namely unbiasedness, consistency, efficiency, and sufficiency, with maximum likelihood as a leading method of constructing estimators.


