Variables
• Variables
• Operational definitions
• Validity
• Reliability
Variables
• Independent variables
– How do we pick and define these?
• Dependent variables
– How do we measure these?
• Extraneous variables
– Control variables
– Random variables
• Confound variables
Independent Variables
• RED
• YELLOW
• GREEN
• BLUE
• PINK
Examples
Task variable:
– massed practice
– distributed practice
Subject variable:
– right-handed
– left-handed
Choosing Your Independent Variable
– Review the literature
– Do a pilot study
– Consider the costs, your resources, and
your limitations
– Be realistic: Pick levels found in the “real
world”
– Pick a large enough range to show the
effect
Independent Variable: Example
A researcher is interested in testing the effectiveness of type of reading program on children's reading skills. Children are assigned to either a tutoring, tutoring-and-rewards, or no-tutoring-and-no-rewards group. Reading scores are recorded after two weeks' time.
• Levels of the IV: (3 levels)
– tutoring
– tutoring and rewards
– no tutoring and no rewards
Operational Definition
• Conceptual definition – the definition of a variable as used in everyday language or in the dictionary
• Operational definition – specifies the precise meaning of a variable within a study:
– Defines a variable in terms of observable operations, procedures, and measurements
– Important for replicability
Practice
A developmental psychologist is interested in how the activity level of four-year-olds is affected by viewing a 30-minute video of SpongeBob SquarePants or a 30-minute video of Sid the Science Kid.
Practice
• An industrial/organizational psychologist
believes that cooling the temperature of a
room may have an impact on productivity of
workers on an assembly line
Variables
• Independent variables
• Dependent variables
– How do we measure these?
• Extraneous variables
– Control variables
– Random variables
• Confound variables
Dependent Variables
• Dependent Variable – something that might
be affected by the change in the independent
variable
– What is observed
– What is measured
– The data collected during the investigation
• The variables that are measured by the
researcher
– They are “dependent” on the independent
variables (if there is a relationship between the IV
and DV as the hypothesis predicts).
Measuring the Dependent Variable
• How to measure your construct:
– Can the participant provide self-report?
• Rating scales
– Is the dependent variable directly observable?
• Choice/decision (sometimes timed)
– Is the dependent variable indirectly observable?
• Physiological measures (e.g. GSR, heart rate)
• Behavioral measures (e.g. speed, accuracy)
Measures of the Dependent Variable
• Accuracy – number of correct responses
• Latency – period between the end of the stimulus display and the beginning of the response; also called reaction time (RT)
• Duration – period between the beginning and end of the response
• Frequency – how often the behavior of interest occurs
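As a rough illustration (the trial records and field names below are hypothetical, not from the lecture), these four measures can be computed from raw trial data:

```python
# Hypothetical trial records: each trial notes whether the response was
# correct, plus stimulus-offset, response-onset, and response-offset times (ms).
trials = [
    {"correct": True,  "stim_end": 0, "resp_start": 450, "resp_end": 900},
    {"correct": False, "stim_end": 0, "resp_start": 610, "resp_end": 1000},
    {"correct": True,  "stim_end": 0, "resp_start": 380, "resp_end": 850},
    {"correct": True,  "stim_end": 0, "resp_start": 520, "resp_end": 940},
]

accuracy = sum(t["correct"] for t in trials)                    # number correct
latencies = [t["resp_start"] - t["stim_end"] for t in trials]   # RT per trial
durations = [t["resp_end"] - t["resp_start"] for t in trials]   # response length
frequency = len(trials)             # count of the behavior of interest

mean_rt = sum(latencies) / len(latencies)
print(accuracy, mean_rt, durations, frequency)
```

The same record format supports all four dependent measures, which is one reason researchers often log full trial-level data rather than only summary scores.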
Dependent Variable: Example
• A researcher is interested in testing the
effectiveness of type of reading program (IV)
on children’s reading skills. Children are
assigned to either a tutoring, tutoring and
rewards, or no tutoring and no rewards
group. Reading scores are recorded after two weeks' time.
• DV: reading skills of children
• Measurement?
Errors in Measurement
• Reliability and Validity
• Reliability
– If you measure the same thing twice (or have two measures of the same thing), do you get the same values?
• Validity
– Does your measure really measure what it is supposed to measure?
• Does our measure really measure the construct?
• Is there bias in our measurement?
[Figure: three targets illustrating measurement that is unreliable and invalid, reliable but invalid, and reliable and valid]
Reliability = consistency
Validity = measuring what is intended
Example
DV: Intelligence
• Test-retest reliability
– Test the same participants more than once
• Measurement from the same person at two different
times
• Should be consistent across different administrations
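A minimal sketch of quantifying test-retest reliability as the Pearson correlation between two administrations of the same measure (the scores below are hypothetical):

```python
# Test-retest reliability: correlate each participant's score at time 1
# with the same participant's score at time 2 (hypothetical data).
time1 = [98, 105, 110, 92, 120, 101]
time2 = [100, 103, 112, 95, 118, 99]

n = len(time1)
mean1 = sum(time1) / n
mean2 = sum(time2) / n
cov = sum((x - mean1) * (y - mean2) for x, y in zip(time1, time2))
var1 = sum((x - mean1) ** 2 for x in time1)
var2 = sum((y - mean2) ** 2 for y in time2)
r = cov / (var1 * var2) ** 0.5   # Pearson r; values near 1 mean consistency
print(round(r, 3))
```

A high positive correlation across administrations is what "consistent" means here; in practice this is the same computation as `scipy.stats.pearsonr`.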
Reliability
• Internal consistency reliability
– Multiple items testing the same construct
– Extent to which scores on the items of a
measure correlate with each other
• Cronbach’s alpha (α)
• Split-half reliability
–Correlation of score on one half of the
measure with the other half (randomly
determined)
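One common index of internal consistency is Cronbach's alpha, α = (k/(k−1))·(1 − Σ item variances / total-score variance). A small sketch with hypothetical item scores (rows = participants, columns = items of one scale):

```python
# Cronbach's alpha for internal consistency (hypothetical 4-item scale,
# 5 participants). Higher alpha = items correlate more with each other.
scores = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 1, 2],
]

def variance(values):
    # Sample variance (n - 1 denominator).
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

k = len(scores[0])                                            # number of items
item_vars = [variance([row[i] for row in scores]) for i in range(k)]
total_var = variance([sum(row) for row in scores])            # total-score variance
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))
```

Split-half reliability can be sketched the same way: correlate the sum of one random half of the items with the sum of the other half.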
Reliability
• Inter-rater reliability
– At least 2 raters observe behavior
– Extent to which raters agree in their
observations
• Are the raters consistent?
– Requires some training in judgment
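Inter-rater agreement is often summarized as simple percent agreement, optionally corrected for chance with Cohen's kappa (the raters and codes below are hypothetical):

```python
# Two raters independently code the same 8 behaviors (hypothetical codes).
rater_a = ["hit", "miss", "hit", "hit", "miss", "hit", "miss", "hit"]
rater_b = ["hit", "miss", "hit", "miss", "miss", "hit", "miss", "hit"]

n = len(rater_a)
# Percent agreement: proportion of trials coded identically.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Cohen's kappa: agreement corrected for chance, based on each rater's
# marginal frequency of using each category.
categories = set(rater_a) | set(rater_b)
expected = sum(
    (rater_a.count(c) / n) * (rater_b.count(c) / n) for c in categories
)
kappa = (observed - expected) / (1 - expected)
print(round(observed, 3), round(kappa, 3))
```

Kappa is lower than raw agreement because two raters who both use a category often will agree sometimes by chance alone.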
Validity
• Face validity
• Criterion-oriented validity
– Predictive validity
– Concurrent validity
• Convergent validity
• Discriminant validity
Variables
• Independent variables
• Dependent variables
• Extraneous variables
– Control variables
– Random variables
• Confound variables
Extraneous Variables
• Any factors other than the independent (or quasi-independent) variables you're considering that might affect scores on the dependent variable.