Why random assignment and control groups?
Random assignment helps with internal validity. Some threats to internal validity:
- Experimenter/subject expectation effects
- Mortality bias: is there an attrition bias, such that subjects who drop out later in the research differ from those who remain?
Classic experimental design, when done properly, can help guard against such threats:

R   X   O1
R       O2

Random assignment makes the two groups largely equivalent, such that we can assume the differences seen may be largely due to the treatment.
This is a posttest-only situation in which the between-groups factor includes a control and a treatment condition. Including a pretest allows:
- A check on randomness
- Added statistical control
- Examination of within-subject change
It also gives two ways to determine treatment effectiveness: as an overall treatment effect and in terms of change.
Pre-test/Post-test
- Random assignment
- Observation for the two groups at time 1
- Introduction of the treatment for the experimental group
- Observation of the two groups at time 2
- Note change for the two groups
Mixed design
2x2
Group:  treatment (5 subjects)  |  control (5 subjects)
Pre:    20  10  60  20  10      |  50  10  40  20  10
Post:   70  50  90  60  50      |  20  10  30  50  10
SPSS output
Tests of Within-Subjects Effects (Measure: MEASURE_1; sphericity assumed)

Source            Type III SS   df   Mean Square   F        Sig.   Partial Eta Sq.
prepost           1805.000      1    1805.000      13.885   .006   .634
prepost * treat   2205.000      1    2205.000      16.962   .003   .680
Error(prepost)    1040.000      8    130.000
Tests of Between-Subjects Effects (Measure: MEASURE_1; Transformed Variable: Average)

Source   Type III SS   df   Mean Square   F       Sig.   Partial Eta Sq.
treat    1805.000      1    1805.000      3.406   .102   .299
Error    4240.000      8    530.000
Why are we not worried about sphericity here? With only two levels of the repeated factor, sphericity is satisfied automatically. The results show:
- No main effect for treatment (though close, and with a noticeable effect size)
- A main effect for prepost (often not surprising)
- A significant interaction
Interaction

The interaction suggests that those in the treatment are benefiting from it, while those in the control are not improving due to the lack of the treatment.

[Figure: Estimated Marginal Means of MEASURE_1, plotted at Pre and Post for the treatment and control groups]
Although the interaction was the effect of interest, in this situation we could have provided those results with a simpler analysis. Essentially the question regards the differences among treatment groups in the change from time 1 to time 2: a t-test on the gain (difference) scores from pre to post. For two groups, t² = F for the interaction.
Independent Samples Test (on the gain scores)

Levene's Test for Equality of Variances: Sig. = .172 (equal variances assumed)
t-test for Equality of Means: t = -4.118, df = 8
95% Confidence Interval of the Difference: -65.517 to -18.483
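As a check, the gain scores and that t value can be reproduced with a few lines of scipy; the ten cases are the example data from the table above, and note that t² matches the prepost * treat interaction F from the mixed ANOVA.

```python
from scipy import stats

# Pre/post scores from the slides' example (5 treatment, 5 control subjects)
treat_pre, treat_post = [20, 10, 60, 20, 10], [70, 50, 90, 60, 50]
ctrl_pre,  ctrl_post  = [50, 10, 40, 20, 10], [20, 10, 30, 50, 10]

# Gain (difference) scores from pre to post
treat_gain = [b - a for a, b in zip(treat_pre, treat_post)]  # [50, 40, 30, 40, 40]
ctrl_gain  = [b - a for a, b in zip(ctrl_pre, ctrl_post)]    # [-30, 0, -10, 30, 0]

# Independent-samples t-test on the gains (equal variances assumed, as in SPSS)
t, p = stats.ttest_ind(ctrl_gain, treat_gain)
print(f"t(8) = {t:.3f}, p = {p:.3f}")  # t(8) = -4.118, and t**2 = 16.96 = interaction F
```

The sign of t simply reflects which group is entered first; squaring it recovers the interaction F of 16.962 from the SPSS output.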
Another way: analysis of covariance would provide a description of differences among treatment groups at post while controlling for individual differences at pre. Note how our research question now shifts to one in which our emphasis is on differences at time 2, rather than describing differences in the change from time 1 to time 2.
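That ANCOVA can be sketched from scratch on the same data using only numpy; the model-comparison F test below (full model with treatment vs. reduced model with pretest only) is a standard way to carry out a two-group ANCOVA, though this is an illustration rather than SPSS's exact procedure.

```python
import numpy as np

# Same ten cases as the mixed-design example
pre   = np.array([20, 10, 60, 20, 10, 50, 10, 40, 20, 10], float)
post  = np.array([70, 50, 90, 60, 50, 20, 10, 30, 50, 10], float)
group = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], float)  # 1 = treatment

def rss(X, y):
    """Residual sum of squares from an ordinary least squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

n = len(post)
ones = np.ones(n)
rss_reduced = rss(np.column_stack([ones, pre]), post)         # post ~ pre
rss_full    = rss(np.column_stack([ones, pre, group]), post)  # post ~ pre + group

# F test for the treatment effect after adjusting for the pretest covariate
F = (rss_reduced - rss_full) / (rss_full / (n - 3))
print(f"ANCOVA F(1, {n - 3}) = {F:.2f}")  # F(1, 7) ≈ 21.03
```

Note the treatment effect, adjusted for pretest differences, is considerably stronger here than the unadjusted between-subjects F of 3.406.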
More threats to internal validity:
- Instrumentation change: variables are not measured in the same way in the before and after studies. A common way for this to occur is when the observers/raters, through experience, become more adept at measurement.
- History (intervening events): events not part of the study intervene between the before and after studies and have an effect.
- Maturation: invalid inferences may be made when the maturation of the subjects between the before and after studies has an effect (e.g., the effect of experience), but maturation has not been included as an explicit variable in the study.
- Regression toward the mean: if subjects are chosen because they are above or below the mean, one would expect they will be closer to the mean on re-measurement, regardless of the intervention. For instance, if subjects are sorted by skill and then administered a skill test, the high and low skill groups will probably be closer to the mean than expected.
- Test experience: the before study impacts the after study in its own right, or multiple measurements themselves affect later responses.
Pre-test sensitization

So what if exposure to the pretest automatically influences posttest results in terms of how well the treatment will have its effect? Example: attitudes about human rights violations measured after exposure to a treatment may depend on whether a pretest on those attitudes was given. Such effects of a pretest create a threat to construct validity. Combining the two basic designs creates the Solomon 4-group design, which can determine if pretest sensitization is a problem:
R   O1   X   O2
R   O3       O4
R        X   O5
R            O6

Pretest × Treatment interaction: if the two treatment groups (O2 vs. O5) are different, pretest sensitization is an issue. If the two control groups (O4 vs. O6) are different, there is a testing effect in general: the pretest itself influences posttest results.
One analysis option is to treat all four groups as levels of a single 4-level factor and contrast the treatment groups vs. the non-treatment groups, though doing so loses the sense of change/gain. What about the Pretest × Treatment interaction?
Significant interaction

If the Treatment × Pretest interaction (Test A) is significant, test the simple effects (Tests B and C):
- Test B: treatment vs. control among the groups exposed to the pretest
- Test C: treatment vs. control among the groups not exposed to the pretest

If only Test B is significant, the treatment works in the presence of the pretest. However, could there be a treatment effect in addition to an enhancement of the treatment by the pretest? Ex.: a Kaplan/Princeton Review class helps in addition to the effect of having taken the GRE before. If the other simple effect, Test C, is significant also (still assuming a significant interaction), we could conclude that was the case.
Non-significant interaction

If there is no interaction to begin with, check the main effect of treatment (Test D). If it is not significant, that may not be indicative of no treatment effect, because we would be disregarding the pre data (less power). Instead, use an analysis that takes into account differences among individuals at pretest:
- ANCOVA (Test E)
- t-test on gain/difference scores (Test F)
- or a mixed design (Test G), with a between-groups factor of Treatment and a within-groups factor of Pre-Post

Tests F and G are statistically identical to one another; however, Test E will more likely have additional power.
Ancova

We can interpret the ANCOVA as allowing for a test of the treatment after posttest scores have been adjusted for the pretest scores. Basically it boils down to: what difference at post would we see if the groups had started out equal at pre?
In SPSS

The ANCOVA (or the other tests) will only concern groups one and two, as they are the only ones with pretests to serve as a covariate, or to produce difference scores for the mixed design/t-test approach.
If that test shows the treatment to still have an effect, we can conclude that the treatment has some utility beyond whatever effects the pretest has on the posttest. If that test is not significant, however, we may perform yet another test:

Test H

A t-test comparing the posttest scores of groups 3 and 4 (O5 vs. O6). It has less power compared to the others (only half the data and no pre information), but if it is significant despite the lack of power, we can assume some treatment effect.
Meta-analysis

Even if this test is not significant, Braver & Braver (1988) suggest a meta-analytic technique that combines the results of the previous two tests (Test E, F, or G, and Test H). Note how each is done with only a portion of the data; the combination gains power from a consideration of all the data. Convert each test's result to a one-tailed z-score, add the two z-scores, and divide by √2 (the square root of the number of z-scores involved) to give z_meta. If z_meta is significant, conclude there is a treatment effect.
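This z-score combination (Stouffer's method) is a few lines of scipy; the two p-values below are hypothetical, chosen only to illustrate how two individually marginal results can combine to a significant z_meta.

```python
from scipy.stats import norm

def z_meta(p_values):
    """Combine one-tailed p-values via Stouffer's method: sum of z / sqrt(k)."""
    z = [norm.isf(p) for p in p_values]  # one-tailed p -> z (inverse survival fn)
    zm = sum(z) / len(z) ** 0.5
    return zm, norm.sf(zm)               # combined z and its one-tailed p

# Hypothetical one-tailed p-values from, say, Test E and Test H
zm, pm = z_meta([0.08, 0.10])
print(f"z_meta = {zm:.2f}, p = {pm:.3f}")  # z_meta = 1.90, p = 0.029
```

Neither test reaches .05 on its own, but the combined z_meta of about 1.90 does, showing where the extra power comes from.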
Nowadays one might want to use an effect size, r or d, for the meta-analysis (see Hunter and Schmidt), as there are obvious issues in using p-values. One might also just examine the Cohen's d for each test (without further analysis) and draw a conclusion from that. Also, by running the meta-analysis only after non-significant results, we are forcing an inclusion criterion on the meta (selection bias).
Problems
Partly for that reason, Braver and Braver acknowledge that the meta-analytic technique should be conducted regardless of the outcomes of the previous tests: if Tests A and D are nonsignificant, do all the steps on the right side of their flowchart.
One problematic scenario they examined had a slightly negative correlation between pre and post for one setup, an almost negligible positive correlation in the other, and only one mean significantly different from the others; probably not a likely scenario in practice. The approach has been shown to be useful in the applied setting, but there still may be concerns regarding the Type I error rate. Gist: be cautious in interpretation, but feel free to use the approach if you suspect pre-test effects.
MC's summary/take

1. Do all the tests on the right side if Tests A and D are nonsignificant.
   - If there is a treatment effect but not a pretest effect, the meta-analysis is more powerful for moderate and large sample sizes; with small sample sizes the classical ANCOVA is slightly more powerful.
   - Because the ANCOVA makes use of pretest scores, it is noticeably more powerful than the meta-analysis, whereas the t-test is only slightly more powerful than the meta-analysis.
   - When the pretest does not affect the effectiveness of the treatment, the ANCOVA or t-test is typically more powerful than the meta-analysis.
2. Perhaps apply an FDR correction to the analyses conducted on the right side to control the Type I error rate.
3. Focus on effect size to aid your conclusions.
Reliability
What is reliability? Often thought of as consistency, but consistency is more of a by-product of reliability. Not to mention that you could have perfectly consistent scores lacking variability (i.e., constants) for which one could not obtain measures of reliability. Reliability is better thought of as the ability to capture an individual's true score, to distinguish accurately one person from another on some measure.
Observed variance = true-score variance + error variance.

If observed variance goes up, power will decrease. However, if observed variance goes up, we don't automatically know which component increased; if it is the true-score variance, reliability will increase. Hence the seeming paradox: reliability goes up, power goes down.
The point is that the psychometric properties of the variables play an important, and not altogether obvious, role in how we will interpret results, and not having a reliable measure is a recipe for disaster.
Error in Anova

Typical breakdown in a between-groups design:

SStot = SSb/t + SSe

i.e., variation due to treatment plus random variation (error). The F statistic is a ratio of these variances:

F = MSb / MSe
Classical true score theory: each subject's score = true score + error of measurement. MSe can thus be further partitioned into variation due to true differences on scores between subjects and error of measurement (unreliability):

MSe = MSer + MSes

where MSer regards measurement error and MSes regards systematic differences between individuals.
The reliability of the measure determines the extent to which the two sources of variability (MSer and MSes) contribute to the overall MSe:
- If reliability = 1.00, MSer = 0: the error term reflects only systematic individual differences.
- If reliability = 0.00, MSes = 0: the error term reflects measurement error only.
In general: MSer = (1 − reliability) × MSe and MSes = reliability × MSe.
One can test, with F = MSes/MSer on df (n − 1, n − 1), whether a significant portion of the error is attributable to systematic individual differences. However, with strong main effects/interactions we might see a significant F for this test even though the contribution to the model is not very much, so also calculate an effect size (eta-squared): SSes/SStotal. Lyons and Howard suggest (based on Cohen's rules of thumb) that a value < .33 would suggest that further investigation may not be necessary.
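The partition and the eta-squared check are simple arithmetic; the sketch below uses MSe = 130, SSe = 1040, and the SS totals from the earlier SPSS tables, while the reliability of .80 is an assumed value purely for illustration.

```python
# Partition the ANOVA error term given a reliability estimate.
def partition_error(ms_e, reliability):
    ms_er = (1 - reliability) * ms_e  # measurement-error component
    ms_es = reliability * ms_e        # systematic individual differences
    return ms_er, ms_es

# MSe = 130 from the earlier SPSS output; reliability = .80 is assumed
ms_er, ms_es = partition_error(130.0, 0.80)
print(round(ms_er, 1), round(ms_es, 1))  # 26.0 104.0

# Effect size for the systematic component: SSes / SStotal
ss_es    = 0.80 * 1040.0   # reliability x SSe
ss_total = 11095.0         # sum of all SS terms in the SPSS tables
eta_sq   = ss_es / ss_total
print(round(eta_sq, 3))    # 0.075, well under the .33 rule of thumb
```

Under this assumed reliability, most of the error term would reflect systematic individual differences rather than measurement error, yet its share of the total variance remains small.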
Summary
Gist: discerning the true nature of treatment effects, e.g. for clinical outcomes, is not easy, and it is not accomplished just because one has done an experiment and seen a statistically significant effect. Small though significant effects with not-so-reliable measures would not be reason to go with any particular treatment, as most of the variance would be due to poor measures and to subjects that do not respond similarly to that treatment. One sign of differential response to the treatment would be heterogeneity of variance: for example, lots of variability in the treatment group, not so much in the control. In that case a noticeable amount of the unaccounted-for variance may be due to subjects responding differently to the treatment. Methods for dealing with the problem are outlined in Bryk and Raudenbush (hierarchical linear modeling), but one strategy may be to single out suspected …
Resources
Zimmerman & Williams (1986)