
Chapter 2
Statistical Inference
Outline
Introduction
Basic Idea
Sampling Distribution
The Null Hypothesis
Point Estimation and Interval Estimation
Properties of Estimators
Methods of Estimation
Important Ideas Underlying Statistical Inference
Basic Idea
A sample value, or measurable characteristic of a sample, is called a
statistic (e.g., the sample mean, X bar).
A population value, or measurable characteristic of a population, is
called a parameter, or population parameter (e.g., the population mean, mu).
Parameter values are unknown, exist at the population level, and can only
be estimated.
Sample values are known, exist at the sample level, and are calculated.
Sample values change with each sample, so sample values are random
variables.
Sample values are estimators of population values.
The process of estimating population values is called statistical inference.
Sampling Distribution
A sample statistic is a random variable.
The distribution that a sample statistic follows is its sampling
distribution.
For example, suppose the population mean is mu and the sample statistic is
X bar. If we draw a large number of samples with replacement from
this population and obtain that many X bars, the distribution
followed by the X bars is called the sampling distribution.
The mean of this distribution is mu, and the standard deviation of
this distribution is the standard error of the mean.
In general, the standard deviation of a sampling distribution is
known as the standard error of the statistic.
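The idea above can be illustrated with a short simulation (a sketch with hypothetical values mu = 100, sigma = 15, n = 25): drawing many samples and collecting their means approximates the sampling distribution, whose standard deviation should be close to sigma / sqrt(n).

```python
import random
import statistics

# Hypothetical population: mean mu = 100, standard deviation sigma = 15.
random.seed(42)
mu, sigma, n, n_samples = 100, 15, 25, 20000

# Draw many samples of size n and record each sample mean (X bar).
sample_means = []
for _ in range(n_samples):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    sample_means.append(statistics.mean(sample))

# The mean of the sampling distribution should be close to mu, and its
# standard deviation (the standard error) close to sigma / sqrt(n).
se_theory = sigma / n ** 0.5            # 15 / 5 = 3.0
se_empirical = statistics.stdev(sample_means)
print(round(statistics.mean(sample_means), 1), round(se_empirical, 1))
```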
The Null Hypothesis
The null hypothesis is a statistical hypothesis (H0: say "H naught",
"H zero", or "H null") that is tested for possible rejection under the
assumption that it is true.
The null usually implies that the observations are the result of chance.
A null stating that the population mean is zero, and an alternative
stating that the population mean is not zero:
H0: mu = 0
HA: mu ≠ 0

Fisher's View of the Null Hypothesis

Neyman-Pearson View of the Null Hypothesis
Null Hypothesis Testing and the Sampling Distribution
Work out the sampling distribution of the statistic under the null.
The standard error of the statistic is the standard deviation of its
sampling distribution. When sigma is not known for X bar, the sample
standard deviation S is used in its place:
standard error of X bar = sigma / sqrt(n), estimated by S / sqrt(n)
When sigma is known, the resulting test statistic follows the Z
distribution.
For example, take a sample of size 50 with mean 115, where sigma = 15:
H0: mu = 110
HA: mu ≠ 110
standard error of X bar = sigma / sqrt(n) = 15 / sqrt(50) = 2.12
Z = (X bar - mu) / (standard error) = (115 - 110) / 2.12 = 2.36
The probability of Z ≥ 2.36 is 0.0092, which is considered a small
probability.
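The worked example can be reproduced in a few lines (a sketch using the slide's numbers; the standard library's `math.erf` gives the standard normal CDF):

```python
import math

# Numbers from the slide: n = 50, sample mean 115, H0: mu = 110, sigma = 15.
n, xbar, mu0, sigma = 50, 115, 110, 15

se = sigma / math.sqrt(n)               # standard error of the mean
z = (xbar - mu0) / se                   # Z test statistic

# One-tailed p-value: P(Z >= z) via the standard normal CDF.
p = 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))
print(round(se, 2), round(z, 2), round(p, 4))   # 2.12 2.36 0.0092
```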
Decision
The region of acceptance is decided by the critical value, or alpha,
which is decided in advance.
Generally alpha = 0.05, which is also known as the level of significance:
the probability that the obtained results are just by chance and not
because of systematic variation.
Errors are possible when we accept or reject the null.
Levels of significance are not to be used rigidly.
                                 Reality about H0 in the population
                                 H0 is True            H0 is False
Decision about    Do not         Correct Decision      Type II Error
null using        Reject H0      p = 1 - alpha         p = beta
sample            Reject H0      Type I Error          Correct Decision
                                 p = alpha             p = 1 - beta
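The Type I error row of the table can be checked by simulation (a sketch with hypothetical numbers): when H0 is true and alpha = 0.05, a two-tailed Z test should reject in roughly 5 percent of samples.

```python
import random

# Simulate repeated sampling when H0 is actually true (mu = mu0 = 110).
random.seed(0)
mu0, sigma, n, trials = 110, 15, 50, 10000
z_crit = 1.96                           # two-tailed critical value, alpha = 0.05

rejections = 0
for _ in range(trials):
    sample = [random.gauss(mu0, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    z = (xbar - mu0) / (sigma / n ** 0.5)
    if abs(z) > z_crit:
        rejections += 1                 # Type I error: rejecting a true null

print(rejections / trials)              # close to alpha = 0.05
```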
Steps in Statistical Hypothesis Testing
Start with a theoretical hypothesis.
Specify a statistical (probabilistic) form of the psychological theory.
Collect the data on a sample.
State the null and specify the alpha level (Type I error rate).
Work out the sampling distribution of the statistic used for estimating
the parameter.
Test the null hypothesis at level alpha and make a decision.
Estimation Theory
Point Estimation and Interval Estimation
The parameter is theta and the estimator is theta hat.
Point Estimation: estimating a specific value of the parameter.
For example, estimating the population mean from the sample mean.
Interval Estimation: estimating two values between which the
parameter is expected to fall.
For example, a 95 percent confidence interval for the mean.
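A minimal sketch of interval estimation, reusing the slide's earlier example numbers (X bar = 115, sigma = 15, n = 50) and assuming sigma is known:

```python
import math

# 95 percent confidence interval for the mean when sigma is known:
# X bar +/- 1.96 * sigma / sqrt(n), since z = 1.96 cuts off 2.5% per tail.
xbar, sigma, n = 115, 15, 50
se = sigma / math.sqrt(n)

lower = xbar - 1.96 * se
upper = xbar + 1.96 * se
print(round(lower, 2), round(upper, 2))   # 110.84 119.16
```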
Properties of Estimators
Small Sample Properties
Unbiasedness
Minimum Variance
Efficient Estimator
Linearity
BLUE (Best Linear Unbiased Estimator)
Mean Square Error (MSE)
Large Sample Properties
Asymptotic Unbiasedness
Consistency
Asymptotic Efficiency
Asymptotic Normality
Sufficiency
Methods of Estimation
Method of Moments
Method of Least Squares
Method of Maximum Likelihood (ML)
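As a sketch of the method of moments (with hypothetical data drawn from an exponential distribution, where the population mean is 1/lambda, so equating the first sample moment to the first population moment gives lambda hat = 1 / sample mean; for this distribution the moment estimator happens to coincide with the maximum-likelihood estimator):

```python
import random

# Hypothetical data: exponential distribution with true rate lambda = 2.0.
random.seed(7)
true_rate = 2.0
data = [random.expovariate(true_rate) for _ in range(100000)]

# Method of moments: match the first sample moment to the first
# population moment E[X] = 1 / lambda, then solve for lambda.
sample_mean = sum(data) / len(data)     # first sample moment
rate_hat = 1 / sample_mean              # method-of-moments (and ML) estimate
print(round(rate_hat, 1))
```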
The Wald, Likelihood Ratio, and Lagrange Multiplier Tests
Additional Topics:
(i) Bayes' Rule and Bayesian Inference;
(ii) Bootstrap
Other Important Ideas:
(i) Central Limit Theorem;
(ii) Law of Large Numbers;
(iii) Cramér-Rao Inequality;
(iv) Rao-Blackwell Theorem.
