Advanced Research Methods I: Variables
• Variables
• Operational definitions
• Validity
• Reliability
Variables
• Independent variables
– How do we pick and define these?

• Dependent variables
– How do we measure these?

• Extraneous variables
– Control variables
– Random variables

• Confound variables
Independent Variables
– Its value is picked or manipulated by the researcher
– Its value is what differs between the conditions of a study
– Shorthand is IV
– The set values or conditions of the IV are called levels
Example
• Does practice style have an effect on task
performance?
• IV: practice style
• Manipulation:
– Continuous or massed practice
– Distributed practice
Types of independent variables
• Environmental variables: aspects of the physical
environment that can be brought under the researcher’s
direct control (typically in an experiment)
– Stimulus or event is manipulated
• Task variables: aspects of a task in the study that the
researcher can vary
– Instruction is manipulated
• Subject variables: characteristics of the subjects that are
used as independent variables in a quasi-experiment or an
ex post facto study
– There are (often pre-existing) differences between
the subjects in the different conditions
Example: Environmental variable
[Figure: two columns of the color words RED, YELLOW,
GREEN, BLUE, and PINK, one list per condition; the
manipulated stimulus property (e.g., the ink color of the
words) is not recoverable from this text version.]
Examples
Task variable:
– massed practice
– distributed practice

Subject variable:
– right-handed
– left-handed
Choosing your independent
variable
– Review the literature
– Do a pilot study
– Consider the costs, your resources, and
your limitations
– Be realistic: Pick levels found in the “real
world”
– Pick a large enough range to show the
effect
Independent Variable: Example
A researcher is interested in testing the effectiveness
of type of reading program on children’s reading
skills. Children are assigned to either a tutoring,
tutoring and rewards, or no tutoring and no rewards
group. Reading scores are recorded after two weeks’
time.
• Levels of the IV: (3 levels)
– tutoring
– tutoring and rewards
– no tutoring and no rewards
Operational Definition
• Conceptual definition: the definition of a variable as
used in everyday language or in the dictionary
• Operational definition: specifies the precise meaning
of a variable within a study:
– Defines a variable in terms of observable
operations, procedures, and measurements
– Important for replicability
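As a concrete (if simplified) illustration, here is a minimal Python sketch of one possible operational definition of “reading skill”; the word list, time limit, and scoring rule are illustrative assumptions, not taken from the slides.

```python
# Conceptual definition: "reading skill" -- a child's general ability to read.
# One possible operational definition (an illustrative assumption): the
# proportion of words from a fixed 50-word list that the child reads aloud
# correctly within 60 seconds, scored by a trained observer.

def reading_skill_score(correct_words: int, list_length: int = 50) -> float:
    """Operationalized reading skill: proportion of list words read correctly."""
    return correct_words / list_length

print(reading_skill_score(37))  # 0.74 for one child on this operationalization
```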
Practice
A developmental psychologist is interested in how the
activity level of four-year-olds is affected by viewing a
30-minute video of SpongeBob SquarePants or a
30-minute video of Sid the Science Kid.
Practice
• An industrial/organizational psychologist
believes that cooling the temperature of a
room may have an impact on productivity of
workers on an assembly line
Variables
• Independent variables

• Dependent variables
– How do we measure these?

• Extraneous variables
– Control variables
– Random variables

• Confound variables
Dependent Variables
• Dependent Variable – something that might
be affected by the change in the independent
variable
– What is observed
– What is measured
– The data collected during the investigation
• The variables that are measured by the
researcher
– They are “dependent” on the independent
variables (if there is a relationship between the IV
and DV as the hypothesis predicts).
Measuring the Dependent Variable
• How to measure your construct:
– Can the participant provide self-report?
• Rating scales
– Is the dependent variable directly observable?
• Choice/decision (sometimes timed)
– Is the dependent variable indirectly observable?
• Physiological measures (e.g. GSR, heart rate)
• Behavioral measures (e.g. speed, accuracy)
Measures of the Dependent Variable
• Accuracy: number of correct responses
• Latency: period between the end of the stimulus
display and the beginning of the response; also called
reaction time (RT)
• Duration: period between the beginning and end of
the response
• Frequency: how often the behavior of interest occurs
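A minimal Python sketch of how these measures might be computed from trial data; the trial format and timestamps are invented for illustration.

```python
from statistics import mean

# Hypothetical trials: (correct?, stimulus_offset_s, response_onset_s, response_offset_s)
trials = [
    (True,  1.00, 1.45, 1.80),
    (False, 1.00, 1.62, 2.10),
    (True,  1.00, 1.51, 1.95),
    (True,  1.00, 1.40, 1.75),
]

accuracy  = sum(correct for correct, *_ in trials)                  # number correct
latencies = [onset - stim_off for _, stim_off, onset, _ in trials]  # RT per trial
durations = [end - onset for _, _, onset, end in trials]            # response durations
# Frequency would simply count occurrences of the target behavior in a window.

print(f"accuracy: {accuracy}/{len(trials)}")
print(f"mean latency (RT): {mean(latencies):.2f} s")
print(f"mean duration: {mean(durations):.2f} s")
```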
Dependent Variable: Example
• A researcher is interested in testing the
effectiveness of type of reading program (IV)
on children’s reading skills. Children are
assigned to either a tutoring, tutoring and
rewards, or no tutoring and no rewards
group. Reading scores are recorded after two weeks’ time.
• DV: reading skills of children
• Measurement?
Errors in Measurement
• Reliability and Validity
• Reliability
– If you measure the same thing twice (or have two
measures of the same thing) do you get the same values?
• Validity
– Does your measure really measure what it is supposed to
measure?
• Does our measure really measure the construct?
• Is there bias in our measurement?
[Figure: three targets illustrating (1) unreliable and invalid,
(2) reliable but invalid, and (3) reliable and valid
measurement.]

Reliability = consistency
Validity = measuring what is intended
Example
• DV: intelligence
• How do we measure the construct?
• How good is our measure?
– How does it compare to other measures of the
construct?
– Is it a self-consistent measure?
Reliability
[Figure: two targets contrasting reliable vs. unreliable
measurement.]
• Test-retest reliability
– Test the same participants more than once
• Measurement from the same person at two different
times
• Should be consistent across different administrations
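Test-retest reliability is typically quantified by correlating the two sets of scores. A minimal sketch with invented scores:

```python
from scipy.stats import pearsonr

# Hypothetical scores for the same eight participants, tested twice
time1 = [12, 15, 11, 18, 14, 16, 10, 17]
time2 = [13, 14, 12, 19, 13, 17, 11, 16]

r, p = pearsonr(time1, time2)
print(f"test-retest r = {r:.2f}")  # values near 1.0 indicate consistency
```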
Reliability
• Internal consistency reliability
– Multiple items testing the same construct
– Extent to which scores on the items of a measure
correlate with each other
• Cronbach’s alpha (α)
• Split-half reliability
– Correlation of the score on one (randomly
determined) half of the measure with the score on
the other half
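A minimal sketch of both statistics, assuming a respondents-by-items score matrix; the data are invented:

```python
import numpy as np

def cronbach_alpha(scores):
    """alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))"""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half_r(scores, seed=0):
    """Correlate total scores on two randomly chosen halves of the items."""
    scores = np.asarray(scores, dtype=float)
    order = np.random.default_rng(seed).permutation(scores.shape[1])
    half = scores.shape[1] // 2
    return np.corrcoef(scores[:, order[:half]].sum(axis=1),
                       scores[:, order[half:]].sum(axis=1))[0, 1]

# Hypothetical data: 5 respondents x 4 items rated on a 1-7 scale
data = [[5, 6, 5, 6],
        [2, 3, 2, 2],
        [4, 4, 5, 4],
        [6, 7, 6, 7],
        [3, 3, 4, 3]]
print(f"Cronbach's alpha = {cronbach_alpha(data):.2f}")
print(f"split-half r     = {split_half_r(data):.2f}")
```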
Reliability

• Inter-rater reliability
– At least 2 raters observe behavior
– Extent to which raters agree in their
observations
• Are the raters consistent?
– Requires some training in judgment
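The slides do not name a specific agreement statistic; Cohen’s kappa is one common choice for two raters making categorical judgments, since it corrects raw agreement for chance. A minimal sketch with invented codings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical codings of 10 behavior episodes as on-task/off-task
a = ["on", "off", "on", "on", "off", "on", "off", "on", "on", "off"]
b = ["on", "off", "on", "off", "off", "on", "off", "on", "on", "on"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```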
Many kinds of validity
• Construct validity
– Face validity
– Criterion-oriented validity
• Predictive validity
• Concurrent validity
– Convergent validity
– Discriminant validity
• Internal validity
• External validity
Construct Validity
• Usually requires multiple studies, a large body
of evidence that supports the claim that the
measure really tests the construct
Face Validity
• At the surface level, does it look as if the measure is
testing the construct?
• “This guy seems smart to me, and he got a high score
on my IQ measure.”
External Validity
• Are experiments “real life” behavioral
situations, or does the process of control put
too much limitation on the “way things really
work?”
External Validity
• Variable representativeness
– Relevant variables for the behavior studied
• Subject representativeness
– Characteristics of sample and target population
• Setting representativeness
– Ecological validity: are the properties of the
research setting similar to those outside the lab?
Internal Validity
• The precision of the results
– Did the change in the DV result from the change
in the IV, or did it come from something else?
Threats to internal validity
• History – an event happens during the course of
the study
• Maturation – participants get older (and other
changes)
• Selection – nonrandom selection may lead to
biases
• Mortality – participants drop out or can’t
continue
• Testing – being in the study actually influences
how the participants respond
Variables
• Independent variables

• Dependent variables

• Extraneous variables
– Control variables
– Random variables

• Confound variables
Extraneous Variables: Any factors other than
the independent (or quasi-independent)
variables you’re considering that might affect
scores on the dependent variable.

Confounds: A confound, or confounding
variable, is an extraneous variable that varies
systematically with your independent variable.
It provides an alternative explanation for your
results over and above your manipulation of
the IV (i.e., a difference between conditions or
groups other than what was intended).
• Random variables: may freely vary, to spread
variability equally across all experimental
conditions
– Randomization
• A procedure that assures that each level
of an extraneous variable has an equal
chance of occurring in all conditions of
observation.
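A minimal sketch of one randomization procedure, random assignment: shuffle the participant list and deal participants out so each has an equal chance of landing in any condition (participant IDs and condition names are invented):

```python
import random

participants = [f"P{i:02d}" for i in range(1, 13)]   # 12 hypothetical participants
conditions = ["tutoring", "tutoring+rewards", "control"]

random.seed(42)                # fixed seed only so the sketch is reproducible
random.shuffle(participants)   # every ordering equally likely
groups = {cond: participants[i::len(conditions)]     # deal out evenly
          for i, cond in enumerate(conditions)}

for cond, members in groups.items():
    print(cond, members)
```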
• Control techniques:
1. Elimination: remove the EV from the situation
2. Constancy of conditions: make the EV a constant
part of all conditions
3. Balancing: balance out the effect of the EV by
creating a control group or using a within-groups
design
4. Counterbalancing: to eliminate order effects, balance
out the presentation of stimuli as well as response
assignments (e.g., ABBA, BAAB, ABAB, BABA)
5. Randomization: random selection, random
assignment, random ordering
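A minimal sketch of ABBA counterbalancing for two conditions: each condition occupies early and late serial positions equally often, so linear order effects such as practice or fatigue cancel out on average:

```python
def abba_sequence(a, b, cycles=1):
    """ABBA counterbalancing: A B B A, repeated for the given number of cycles."""
    return [a, b, b, a] * cycles

# e.g., the massed vs. distributed practice manipulation from earlier slides
print(abba_sequence("massed", "distributed"))
# ['massed', 'distributed', 'distributed', 'massed']
```

Half of the participants could instead receive the BAAB sequence, balancing order across subjects as well as within them.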
Identifying potential problems
These are things that you want to try to avoid by careful
selection of the levels of your IV (they may be issues for
your DV as well).

• Demand characteristics: features of the study that may
give away its purpose
• Researcher bias: intentionally or unintentionally
communicating the purpose of the study; one solution is
to keep the researcher (as well as the participants)
“blind” as to which conditions are being tested or
measured
• Reactivity: participants change their behavior because
they know they are being observed (e.g., social
desirability)
• Floor effects: when the task is too difficult, so scores
pile up at the bottom of the scale
• Ceiling effects: when the task is too easy, so scores pile
up at the top of the scale
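Floor and ceiling effects can often be caught in pilot data by checking how many scores pile up at the extremes of the scale. A minimal sketch; the 25% cutoff is an arbitrary illustrative choice, not a standard from the slides:

```python
def floor_ceiling_check(scores, min_score, max_score, cutoff=0.25):
    """Flag a possible floor/ceiling effect when too many scores sit at an extreme."""
    n = len(scores)
    at_floor = sum(s == min_score for s in scores) / n
    at_ceiling = sum(s == max_score for s in scores) / n
    return {"floor_effect": at_floor > cutoff, "ceiling_effect": at_ceiling > cutoff}

# Hypothetical 0-20 quiz where the task was too easy
print(floor_ceiling_check([20, 19, 20, 20, 18, 20, 20, 17], 0, 20))
# {'floor_effect': False, 'ceiling_effect': True}
```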
