
Experimental Research - An attempt by the researcher to maintain control over all factors that may affect the result of an experiment. In doing this, the researcher attempts to determine or predict what may occur.

Steps Involved in Conducting an Experimental Study


1. Identify and define the problem.
2. Formulate hypotheses and deduce their consequences.
3. Construct an experimental design that represents all the elements, conditions, and relations of the consequences.
4. Conduct the experiment.
5. Compile raw data and reduce to usable form (words, tables, statistics).
6. Apply an appropriate test of significance.
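Steps 5 and 6 begin with reducing raw data to usable form. A minimal sketch of step 5 in Python, using made-up scores and group names purely for illustration:

```python
import statistics

# Hypothetical raw scores from an experiment (all values are invented).
raw = {
    "treatment": [78, 85, 82, 90, 74, 88],
    "control":   [70, 75, 72, 68, 80, 71],
}

# Reduce the raw data to a small summary table of n, mean, and SD,
# a usable form for the later test of significance.
summary = {
    group: {
        "n": len(scores),
        "mean": round(statistics.mean(scores), 2),
        "sd": round(statistics.stdev(scores), 2),
    }
    for group, scores in raw.items()
}
```

The resulting summary table feeds directly into step 6, where an appropriate test of significance is applied.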

Essentials of Experimental Research


1. Manipulation of an independent variable
2. Control - an attempt is made to hold constant all variables other than the independent variable being manipulated
3. Observation - effect is observed of the manipulation of the independent variable on the dependent variable

Experimental Design - is a blueprint of the procedure that enables the researcher to test his hypothesis by reaching valid
conclusions about relationships between independent and dependent variables. It refers to the conceptual framework
within which the experiment is conducted.

Experimental Design Terminology


1. Treatment - something that researchers administer to subjects
2. Experimental Units - subjects or respondents
3. Blocks - categories of subjects within the treatment group
4. Factor - a controlled independent variable; general type or category of treatments
5. Level - amount or magnitude of factor
6. Randomness - the property of completely chance events that are not predictable
7. Randomization - a technique of assignment or ordering such that the method itself introduces no consistent or systematic effect into the assignment

Validity of Experimental Design


1. Internal Validity - Does it seem reasonable to assume that the treatment has really produced the measured effect?
2. External Validity - With what other groups could we reasonably expect to get the same results if we used the same
treatment?

Factors Jeopardizing Internal Validity


1. History - the events occurring between the first and second measurements in addition to the experimental variable
which might affect the measurement
2. Maturation - the process of maturing which takes place in the individual during the duration of the experiment which
is not a result of specific events but of simply growing older, growing more tired, or similar changes
3. Pre-testing - the effect created on the second measurement by having a measurement before the experiment
4. Measuring Instruments - changes in instruments, calibration of instruments, observers, or scorers may cause changes
in the measurements
5. Statistical Regression - Groups are chosen because of extreme scores of measurements. Those scores or
measurements tend to move toward the mean with repeated measurements, sometimes even without an experimental
variable.
6. Experimental Mortality - The loss of subjects from comparison groups could greatly affect the comparisons because of
unique characteristics of those subjects.
7. Differential Selection - The selection of the subjects determines how the findings can be generalized. Subjects selected from a small group, or from one with particular characteristics, limit generalizability; findings from subjects chosen randomly from the entire population can be generalized to that population.
8. Placebo Effect - Since many patients are confident that a treatment will positively affect them, they react to a control treatment which actually has no physical effect at all.
9. Interaction of Factors - Combinations of these factors (e.g., selection and maturation) may interact, especially in multiple-group comparisons, to produce erroneous measurements.

Randomized Experimental Designs


1. Completely Randomized Design - objects or subjects are assigned to groups completely at random
2. Randomized Block Design - experimental subjects are first divided into homogeneous blocks before they are randomly
assigned to a treatment group
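The two randomized designs above can be sketched with Python's random module; the subject IDs and the blocking factor (age band) are hypothetical:

```python
import random

# Hypothetical roster of experimental units (subjects).
subjects = [f"S{i:02d}" for i in range(1, 13)]

random.seed(42)  # fixed seed only so the sketch is reproducible

# Completely randomized design: shuffle the roster, then split it
# into treatment and control groups entirely at random.
shuffled = subjects[:]
random.shuffle(shuffled)
treatment, control = shuffled[:6], shuffled[6:]

# Randomized block design: first divide subjects into homogeneous
# blocks (here an invented age-band factor), then randomize the
# assignment to treatment or control *within* each block.
blocks = {"young": subjects[:6], "old": subjects[6:]}
assignment = {}
for block_name, members in blocks.items():
    members = members[:]
    random.shuffle(members)
    half = len(members) // 2
    for s in members[:half]:
        assignment[s] = ("treatment", block_name)
    for s in members[half:]:
        assignment[s] = ("control", block_name)
```

Blocking guarantees that each block contributes equally to both groups, so a block-level difference (e.g., age) cannot be confounded with the treatment.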

R - randomization
O - observation
X - treatment

Pre-Experimental Designs
- loose in structure, could be biased
1. One-shot experimental case study
   Aim: to attempt to explain a consequent by an antecedent
   Notation paradigm: X O
   Comments: an approach that prematurely links antecedents and consequences; the least reliable of all experimental approaches.

2. One-group pretest-posttest
   Aim: to evaluate the influence of a variable
   Notation paradigm: O X O
   Comments: an approach that provides a measure of change but can provide no conclusive results.

3. Static group comparison
   Aim: to determine the influence of a variable on one group and not on another
   Notation paradigm: X O
                        O
   Comments: the weakness lies in no examination of the pre-experimental equivalence of the groups; a conclusion is reached by comparing the performance of each group to determine the effect of a variable on one of them.

True Experimental Designs


- greater control and refinement; greater control over threats to validity
1. Pretest-posttest control group design
   Aim: to study the effect of an influence on a carefully controlled sample
   Notation paradigm: R O X O
                      R O   O
   Comments: this design has been called "the old workhorse of traditional experimentation." If effectively carried out, it controls for the eight threats to internal validity. Data are analyzed by analysis of covariance on posttest scores, with the pretest as the covariate.

2. Solomon four-group design
   Aim: to minimize the effect of pretesting
   Notation paradigm: R O X O
                      R O   O
                      R   X O
                      R     O
   Comments: an extension of the pretest-posttest control group design and probably the most powerful experimental approach. Data are analyzed by analysis of variance on posttest scores.

3. Posttest-only control group design
   Aim: to evaluate a situation that cannot be pretested
   Notation paradigm: R X O
                      R   O
   Comments: an adaptation of the last two groups in the Solomon four-group design. Randomness is critical. Probably the simplest and best test for significance in this design is the t-test.
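The t-test recommended for the posttest-only control group design can be sketched in pure Python; the posttest scores below are made up, and Welch's version of the statistic is used because it does not assume equal group variances:

```python
import math
import statistics

# Hypothetical posttest scores for the two randomized groups
# (R X O / R O): treatment received X, control did not.
treatment = [78, 85, 82, 90, 74, 88]
control = [70, 75, 72, 68, 80, 71]

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances allowed)."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_a - mean_b) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(treatment, control)
# Compare |t| against the critical value from a t table at the chosen
# significance level; in practice a statistics library (e.g.
# scipy.stats.ttest_ind with equal_var=False) returns the p-value directly.
```

A large |t| indicates that the posttest difference between the two randomly formed groups is unlikely to be due to chance alone.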
