
Chapter 3

Measurement

Variables
Characteristics or conditions that change
Have different values for different people
Different types of variables:
Demographic, quantitative or categorical
MUST BE ADEQUATELY MEASURED
Quantitative variables
Variables that denote a quantity or number on a scale
Categorical variables
Simply put people into different categories (usually a fixed number)
In order to see differences among people, we must measure changes in variables
Variables must be defined and measured
Critical step in beginning of research process
Do men cheat more than women? In different ways?
Construct
Construct: an abstract attribute that cannot be directly observed; a hypothetical entity
How you define and measure constructs will MAKE OR BREAK your study
Constructs are influenced by external stimuli and measured through external (observable) behavior
Step 1 in measuring a construct is to provide an operational definition
Operational definitions: provide specific parameters for a broad concept or construct; how variables will be treated/measured in your study

Operational Definitions
How do you come up with them?
Measurement
Measuring constructs
Reliability: consistency of a measure
Validity: degree to which a measure assesses the underlying construct

Reliability

Reliability: producing similar results during repeated measurements for the same individual under the same conditions
Error
Additional variables that might throw off the test
Temperature in the room, fatigue, random error (luck)
Error averages to zero over repeated measures (see the sketch after this list)
Causes of error:
Observer: simple human error
Environmental: small changes that increase error
Participant: changes within the person being measured (e.g., fatigue, mood)
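
A minimal simulation sketch of the point above, in Python with assumed (hypothetical) numbers: each observed score is a true score plus random error, and the error averages toward zero over many repeated measurements.

```python
import numpy as np

rng = np.random.default_rng(0)

true_score = 100.0      # hypothetical true value for one person
n_repeats = 1000        # repeated measurements under the same conditions

# Random error with mean zero: observer, environmental, and
# participant noise lumped together
errors = rng.normal(loc=0.0, scale=5.0, size=n_repeats)
observed = true_score + errors

print(f"Mean error:          {errors.mean():+.3f}")   # close to 0
print(f"Mean observed score: {observed.mean():.2f}")  # close to 100
```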
Types of Reliability:
Test-retest reliability
Parallel-forms reliability
Inter-rater reliability
Split-half reliability
Test-retest reliability
Technique: give the same test to the same individuals on two occasions and correlate the scores
Simple correlation (r = 0 to 1.0; see the sketch after this list)
r ≥ .70 is considered good, but the cutoff is debated
Best with stable characteristics
Problem Example: Relationship questionnaire
Practice effects: the correlation reflects not true reliability but participants remembering how they answered
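
A minimal sketch of how a test-retest correlation might be computed, using made-up scores (not data from the notes): the same eight people take the same questionnaire twice, and the two sets of scores are correlated.

```python
import numpy as np

# Hypothetical scores for 8 participants at time 1 and time 2
time1 = np.array([12, 18, 9, 22, 15, 11, 20, 16])
time2 = np.array([13, 17, 10, 21, 14, 12, 19, 18])

# Pearson correlation between the two administrations
r = np.corrcoef(time1, time2)[0, 1]
print(f"Test-retest reliability: r = {r:.2f}")  # compare against the ~.70 benchmark
```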

Parallel-forms reliability
Uses two equivalent forms of the same test, which fixes practice effects

Inter-rater reliability
Simultaneous measurements of the same behavior by two or more observers
Cohen's Kappa (κ = 0.0 to 1.0)
κ ≥ .75 is the usual rule of thumb
Simple correlation
Percentage agreement
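
A minimal sketch of these inter-rater statistics with hypothetical ratings (two raters classifying the same 10 participants into two categories); percentage agreement is simple matching, and Cohen's Kappa corrects it for chance agreement.

```python
import numpy as np

# Hypothetical categorical ratings from two raters
rater_a = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
rater_b = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])

# Percentage agreement: proportion of cases where the raters match
p_observed = np.mean(rater_a == rater_b)

# Agreement expected by chance, from each rater's marginal proportions
categories = np.unique(np.concatenate([rater_a, rater_b]))
p_chance = sum(np.mean(rater_a == c) * np.mean(rater_b == c)
               for c in categories)

# Cohen's Kappa: observed agreement corrected for chance agreement
kappa = (p_observed - p_chance) / (1 - p_chance)

print(f"Percentage agreement: {p_observed:.2f}")
print(f"Cohen's Kappa:        {kappa:.2f}")  # compare against the ~.75 rule of thumb
```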

Split-half reliability

AKA: Internal Consistency
Split the test items into two halves and correlate the scores on the two halves
Cronbach's Alpha (α = 0 to 1.0)
α ≥ .70 is OK, .80 is great, and anything over .95 suggests redundant items
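
A minimal sketch of Cronbach's Alpha on hypothetical item responses (6 participants answering 4 items on a 1-5 scale), using the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).

```python
import numpy as np

# Hypothetical responses: rows are participants, columns are items
items = np.array([
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [4, 4, 5, 4],
    [1, 2, 2, 1],
])

k = items.shape[1]                          # number of items
item_vars = items.var(axis=0, ddof=1)       # variance of each item
total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores

alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"Cronbach's Alpha = {alpha:.2f}")  # .70 OK, .80 great, > .95 redundant
```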
