
Day 6: Non-Experimental & Experimental Design

Where are the beakers??

What kind of research is considered the gold standard by the Institute of Education Sciences?
A. Descriptive
B. Causal-Comparative
C. Correlational
D. Experimental

Why?

Why does most educational research use non-experimental designs?

What is the purpose of non-experimental designs?

Causal-Comparative Example
Green & Jaquess (1987)
Interested in the effect of high school students' part-time employment on their academic achievement. Sample: 477 high school juniors who were unemployed or employed > 10 hours/wk.

Causal-Comparative Design
A study in which the researcher attempts to determine the cause, or reason, for preexisting differences in groups of individuals. At least two different groups are compared on a dependent variable or measure of performance (called the effect) because the independent variable (called the cause) has already occurred or cannot be manipulated.

Causal-Comparative Design
Ex-post facto
Causes studied after they have exerted their effect on another variable.

Causal-Comparative Design
Drawbacks
Difficult to establish causality based on collected data. Unmeasured variables (confounding variables) are always a source of potential alternative causal explanations.

Some Thought Questions

Correlational Design
Determines whether and to what degree a relationship exists between two or more quantifiable variables.

Correlational Design
The degree of the relationship is expressed as a coefficient of correlation.
Examples
Relationship between math achievement and math attitude
Relationship between the degree of a school's racial diversity and student use of stereotypical language
Your topics?

Correlation coefficient

Scale: -1.00 = strong negative relationship; 0.00 = no relationship; +1.00 = strong positive relationship
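
For reference (this formula is not on the original slide), the most commonly reported coefficient, Pearson's r, is computed from n paired scores $(x_i, y_i)$ as

$$ r = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_{i=1}^{n} (x_i - \bar{x})^2}\;\sqrt{\sum_{i=1}^{n} (y_i - \bar{y})^2}} $$

where $\bar{x}$ and $\bar{y}$ are the sample means. Values near -1.00 or +1.00 indicate a strong linear relationship; values near 0.00 indicate little or no linear relationship.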

Advantages of Correlational Design


Analysis of relationships among a large number of variables in a single study
Information about the degree of the relationship between the variables being studied

Cautions
A relationship between two variables does not mean one causes the other (think about the reading achievement and body weight correlations).
Possibility of low reliability of the instruments makes it difficult to identify relationships.

Cautions
Lack of variability in scores (e.g. everyone scoring very, very low; everyone scoring very, very high; etc.) makes it difficult to identify relationships.
Large sample sizes and/or using many variables can identify significant relationships for statistical reasons and not because the relationships really exist (avoid the shotgun approach).
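
A minimal sketch of the large-sample caution, assuming Python with NumPy and SciPy (the data below are simulated, not from any study): with a very large sample, even a trivially small correlation can reach statistical significance.

```python
# Sketch: a very weak (practically meaningless) correlation can still be
# "statistically significant" when the sample is large. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000                              # large sample size
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)     # true relationship is very weak (r ~ .05)

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, p = {p:.4f}")
# Typically prints r around .05 with p < .05 -- statistically significant,
# but far too small to be practically useful.
```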

Cautions
Need to identify your sample to know what is actually being compared. If using predictor variables, time interval between collecting the predictor and criterion variable data is important.

Correlational Designs
Guidelines for interpreting the size of correlation coefficients
Much larger correlations are needed for predictions with individuals than with groups.
Crude group predictions can be made with correlations as low as .40 to .60.
Predictions for individuals require correlations above .75.

Correlational Designs
Guidelines for interpreting the size of correlation coefficients
Exploratory studies: correlations of .25 to .40 indicate the need for further research.
Much higher correlations are needed to confirm or test hypotheses.

Correlational Designs
Criteria for evaluating correlational studies
Causation should not be inferred from correlational studies.
Practical significance should not be confused with statistical significance.
The size of the correlation should be sufficient for the use of the results (individuals vs. groups).

Think
If you were going to take your action research topic and create a causal-comparative study, what would it look like?
--OR--
If you were going to take your action research project and create a correlational study, what would it look like?

Experimental Design
The Gold Standard?

To Review
Why is most educational research comprised of non-experimental research designs?

To Review
What is the purpose of non-experimental research?

To Review
How does the independent variable function in non-experimental research?

To Review
Can non-experimental research claim causality?

An example
Read the example given in class and, in pairs, respond to the questions.

Experimental Research
Purpose
To make causal inferences about the relationship between the independent and dependent variables

Characteristics
Direct manipulation of the independent variable
Control of extraneous variables

Experimental Designs
Single Group Post-test
Single Group Pre-test Post-test
Non-Equivalent Groups Post-test
Quasi-Experimental Design
Randomized Post-test only
Randomized Pre-test Post-test
Factorial

Examples

Experimental Validity
Internal validity
The extent to which the independent variable, and not other extraneous variables, produced the observed effect on the dependent variable

External validity
The extent to which the results are generalizable

Internal Validity
Threats that reduce the level of confidence in any causal conclusions.
Key Question: Is this a plausible threat to the internal validity of the study?

Threats to Internal Validity


History
Extraneous events have an effect on the subjects' performance on the dependent variable.
Ex - the crash of the stock market, 9/11, the invasion of Iraq, etc.

Selection
Groups that are initially not equal due to differences in the subjects in those groups.
Ex - positive and negative attitudes, high and low achievers, etc.

Threats to Internal Validity


Maturation
Changes experienced within the subject over time

Pretesting
The effect of having taken a pretest

Instrumentation
Poor technical quality (i.e. validity, reliability) or changes in instrumentation

Threats to Internal Validity


Subject attrition
Differential loss of subjects from groups

Statistical regression
The natural movement of extreme scores toward the mean

Diffusion of treatment
The treatment is given to the control group

Experimenter effects
Different characteristics or expectations of those implementing the treatments across groups

Threats to Internal Validity


Subject effects
The effects of being aware that one is involved in a study.
Four types:
Hawthorne effect John Henry effect Resentful demoralization Novelty effect

Internal Validity
Key Point: Ultimately, validity is a matter of judgment. Ask if it is reasonable that possible threats are likely to affect the results.

External Validity
The extent to which results can be generalized from a sample to a particular population.
Question: Why would really good internal validity often result in poor external validity?

External Validity
Factors affecting external validity
Subjects
Representativeness of the sample in comparison to the population
Personal characteristics of the subjects

Situations - characteristics of the setting
Specific environment
Special situation
Particular school

External Validity
Importance of explanation of sampling procedures

Experimental Designs
Single Group Post-test
Single Group Pre-test Post-test - Libby, Deb
Non-Equivalent Groups Post-test - Mary, Cheryl
Quasi-Experimental Design - Pete, Laura
Randomized Post-test only - Amanda, Nicole, Tam
Randomized Pre-test Post-test - Karen, Jen, Justin

Examples

Your Task
Based on the topic of your proposal, design an experimental study using the design you were assigned.
Write a research question and hypothesis. Sketch out the methods.

Identify strengths and weaknesses of each design.

Experimental Designs
Notation
R indicates random selection or random assignment
O indicates an observation (test, observation score, scale score)
X indicates a treatment
A, B, C, ... indicates a group

Pre-Experimental Designs
No pre-experimental design controls internal validity threats well.
Single group post-test only
A X O
Internal validity threats
History, maturation, attrition, experimenter effects, subject effects, and instrumentation are viable threats.
Useful only when the researcher is sure of the status of the knowledge, skill, or attitude being changed and there are no extraneous variables affecting the results.

Pre-Experimental Designs
Single group pretest post-test
A O X O
Internal validity threats
Maturation and pretesting are threats
History and instrumentation are potential threats

Useful when subject effects will not influence the results, history effects can be minimized, and multiple pretests and post-tests are used

Pre-Experimental Designs
Non-equivalent groups post-test only
A X O
B     O
Internal validity threats
Definite threat: Selection
Potential threats: History, maturation, and instrumentation

Useful when groups are comparable and subjects can be assumed to be about the same at the beginning of the study

Quasi-Experimental Designs
Types
Non-equivalent pretest/post-test, experimental control groups
A O X O
B O     O

Non-equivalent pretest/post-test, multiple treatment groups


A O X1 O
B O X2 O

Useful when subjects are in pre-existing groups (e.g. classes, schools, teams, etc.)

Quasi-Experimental Designs
Threats to internal validity
Selection is the major concern.
Likely to control for most other threats, provided the groups are not significantly different from one another.
See Table 9.2 for specific threats related to each design.

True Experimental Designs


Important terminology
Random assignment
Subjects placed into groups at random
Ensures equivalency of the groups

Random selection of subjects


Subjects chosen from the population at random
Ensures generalizability to the population from which the subjects were selected (i.e. external validity)
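
A minimal sketch of the distinction, using Python's standard library and a hypothetical student roster (the names and group sizes are made up for illustration, not from the slides):

```python
# Sketch: random selection vs. random assignment (hypothetical roster).
import random

random.seed(1)
population = [f"student_{i:03d}" for i in range(1, 201)]  # hypothetical population of 200

# Random selection: draw a sample from the population
# (supports generalizing back to that population -- external validity).
sample = random.sample(population, 40)

# Random assignment: split the sampled subjects into two groups at random
# (supports group equivalency at the start of the study -- internal validity).
random.shuffle(sample)
group_a, group_b = sample[:20], sample[20:]

print(len(group_a), len(group_b))  # 20 20
```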

True Experimental Designs


Types
Randomized post-test only experimental control groups
R A X O
R B     O

Randomized post-test only multiple treatment groups


R A X1 O
R B X2 O
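
As an illustration only (the scores below are fabricated, and an independent-samples t-test is just one common way such post-test data might be analyzed), a randomized post-test-only comparison could be examined like this with SciPy:

```python
# Sketch: analyzing a randomized post-test-only design (R A X O / R B O)
# with an independent-samples t-test. All scores are hypothetical.
from scipy import stats

treatment_posttest = [78, 85, 90, 74, 88, 81, 79, 92, 86, 84]  # group A (received X)
control_posttest   = [72, 75, 80, 70, 77, 74, 69, 81, 76, 73]  # group B (no treatment)

t, p = stats.ttest_ind(treatment_posttest, control_posttest)
print(f"t = {t:.2f}, p = {p:.4f}")
```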

True Experimental Designs


Types (continued)
Randomized pretest/post-test multiple treatment groups
R A O X1 O
R B O X2 O

Randomized pretest/post-test experimental control groups


R A O X O
R B O     O

True Experimental Designs


Threats to internal validity
Controls for selection, maturation, and statistical regression.
Likely to control for most other threats.
See Table 9.2 for specific threats related to each design.

Evaluating Experimental Designs


Criteria for evaluating experimental research
The primary purpose is to test causal hypotheses.
There should be direct manipulation of the independent variable.
There should be clear identification of the specific research design.

Evaluating Experimental Designs


Criteria for evaluating experimental research
The design should provide maximum control of extraneous variables.
Treatments are substantively different from one another.
The number of subjects is dependent on or equal to the number of treatment replications.
