
Exam 1 Study Guide

Chapter 1

Empiricism
Idea that knowledge is based on observation. Data allows us to form conclusions
Goals of Science
Description of behavior
Careful observation of behavior
Can be done through simple observation (watching behavior), surveys, tests, etc.
Often goal is to see if one thing causes another, or one thing is related to another
Example: How does self-esteem change across the lifespan?
Prediction of behavior
Allows us to anticipate events
Example: You conduct a study, and find that students with high self-esteem get
better grades than students with low self-esteem. If a student has low self-esteem,
there is a high probability that they will do poorly in their classes.
Determining causes of behavior
Why does a behavior happen?
To change behavior, we need to know why it happens
Remember, CORRELATION DOES NOT EQUAL CAUSATION!

3 things required to determine causality
Temporal precedence: cause always precedes the effect
Covariation of cause and effect: when cause is present, effect occurs.
When the cause isn't present, the effect doesn't occur
Elimination of alternative explanations: nothing other than causal variable
could be responsible for effect occurring
Explanation of behavior
Explain why event occurs
Example: College students who watch a lot of TV have lower grades, because
they spend less time studying than students who don't watch TV.
Description, prediction, determination of cause, and explanation all closely intertwined
These 4 components often need to be revised or discarded as new information comes out
Behavioral science is constantly evolving and moving forward!

Basic research
Tries to answer fundamental questions about nature of behavior
Common topics: cognition, emotion, motivation, learning, neuropsychology, personality,
social behavior
Example research question: Do adults over 65 have less working memory than adults
under 30?

Applied research
Conducted to address issues in which there are practical problems and potential solutions
Note: basic research often informs applied research
Example research question: How can computer work stations be modified to account for
working memory deficits in older workers?
Program evaluation: assesses social reforms and innovations in education, government,
health care, and other institutions
Example: Is the new district program for students with learning disabilities improving
these students' reading abilities?

Chapter 2

Hypothesis
Tentative idea or question waiting for evidence to support or refute it
Characteristics of Theories
1. Theories must be testable
must be possible to falsify them
2. Organize & Explain Knowledge
3. Generate New Knowledge
Self-perception theory (Lepper, Greene, & Nisbett, 1973)
We understand our behavior by observing it
If a person is paid very little for a dull and boring task, then they should find the task
enjoyable
4. Past Research
Every study raises new issues

Use a different setting
Use a different participant population
Use a different methodology
Inconsistencies in results
5. Practical Problems
Should tuition be raised?
How high?
How can we help the students meet the increased cost?
All APA-style papers need to include:
A title
Authors and author information
An abstract
An introduction
A method section
A results section
A discussion section
References
Appendices (if applicable)
Tables (if applicable)
Figures (if applicable)
Title
Concise
Gives information about the study
Good title: An examination of the effect of self-esteem on college student GPA
Bad title: Self-esteem and college students
Author name
First, middle, last (no titles)
Institutional affiliation
Author note
Complete departmental affiliation
Changes in affiliation (if applicable)
Acknowledgements (funding, etc.)
Special circumstances (conference presentation)
Person to contact
Abstract: concise summary of article
Usually between 50 and 200 words

Should describe:
Problem under consideration
Participants
Basics of study method
Basic findings
Conclusion
Introduction
Presents problem under consideration
Explores importance of problem: why should reader care?
Describes relevant scholarship (previous research)
State hypotheses
Methods: describes how study was conducted
Operational definition of variables
Participant characteristics
Sampling procedures
Sample size and power
Measures
Research design
Experimental manipulations
Results: what did you find?
Recruitment of participants and the results of recruitment
Statistics and data analysis
Additional analyses: exploratory, subgroup, etc.
Manipulation fidelity
Adverse events
Discussion: evaluate results and draw conclusions
Were hypotheses supported?
Interpret results
Acknowledge limitations
Suggest areas for future research
Brief overall conclusion
References
Should include all work cited in the paper
Note: Within paper, literature cited parenthetically (Author, year). In the
reference section, the full citation (author, year, title, journal, volume,
page number, doi) is included
APA style guidelines are fairly strict on this: see p. 193-214
Appendices and supplemental materials: important information that would make the
paper itself too cumbersome to include within it
Scale items
Analysis code
Technical formulations and proofs of statistical methods
Stimulus materials
Tables and figures: can be used to display results in an easier-to-interpret way
Placed at end of manuscript for journal submission
During production, the journal places them into the paper in the correct place
Tables and figures should be straightforward and easy to read
Document formatting requirements:
Double-spaced
1-inch margins
Page number and header on top of each page

Chapter 3
Beneficence: need for research to maximize benefits and minimize any possible harmful
effects of participation
Informed consent provides participants with important study information so they can
make an informed decision about whether to participate (associated with autonomy
principle)
Generally includes:
Purpose of research
Procedures used, including time involved
But participants don't need to be told everything if doing so would negatively affect the
study: a general idea is enough
Risks and benefits
Compensation (if any)
Confidentiality
Assurance of voluntary participation and permission to withdraw
Contact information for questions
Should be written in straightforward, simple language
No jargon
8th grade reading level usually recommended
Not written in first person
Non-English speaking participants should receive translated copy
See Figure 3.1 on p. 45 for detailed checklist

Autonomy issues
Some populations (children, patients in psychiatric hospitals, developmentally or
cognitively impaired adults) can't make an informed decision
Usually require someone to give permission (such as a parent) in addition to the
participant's consent
Coercion: individuals may feel that they have no choice but to participate
Examples: prisoners, students, employees
Need to ensure that individuals are aware that they are not required to participate
Excessive compensation/benefits can be type of coercion
Keep benefits in line with what research requires of participants

Debriefing

Happens after completion of study
Researcher tells the participant what the purpose of the research was
Researcher explains any deceptions that occurred, and why they occurred
Researcher offers participant resources if they need to speak with someone about the
study
Justice
Historically, high-risk research conducted with marginalized groups who may not have
had power to refuse to participate
Example: Tuskegee Syphilis Study (1932-1972)
Participants: 399 poor African American men in Alabama
Not treated for syphilis in order to study long-term effects of disease
Decisions to include or exclude certain groups from research need to be based on
scientific grounds
Institutional review board
Institutional review board (IRB): reviews research conducted within institution to ensure
ethical standards maintained
All universities with federal funding have an IRB
Must have at least five members, at least one of whom must be from outside the institution
All research conducted by faculty, students, and staff must be approved by IRB
Must submit application to IRB before beginning research
IRB can make several decisions:
Exempt: no risk to participants; study does not need further IRB review
Examples: Studies with archival data where data can't be linked;
naturalistic observation in public place; survey research with anonymous
responses and no stress to participants
Exempt decision must be made by IRB: researcher can't decide this on
their own
Minimal risk research: risks of harm no greater than those encountered in daily
life
Examples: routine physiological testing; moderate exercise; research
related to individual or group behavior or characteristics of individuals
No special safeguards required
Quick, straightforward IRB review
May get expedited review instead of full board review
Greater than minimal risk: places participants at risk of physical or psychological
harm
Examples: research involving physical or psychological stress, invasion of
privacy, collection of sensitive information
Requires full IRB board review
Must be reviewed at least annually
Fraud and plagiarism
Fraud: fabrication of data
Don't do this. Period.

Plagiarism: misrepresenting someone else's work as your own
Always cite sources. If an idea wasn't yours, it needs to be cited.
If directly quoting, use quotation marks, and cite properly.
Chapter 4

Variables
Some variables have numeric properties associated with them
Height
Cognitive ability
Extraversion
Some variables have categories
Gender
Geographic location
Major
The type of variable (quantitative vs. categorical) determines what kind of statistical
analyses can be conducted
Operational definition: set of procedures used to measure or manipulate variable
Need operational definition to empirically study variable
Example: Is personality related to academic performance?
What do we mean by personality?
Variable: event, situation, behavior, or individual characteristic that varies
Has to have 2 or more levels
Variable examples:
Gender
Job satisfaction
Reading skill
Self-esteem
Hair color
Height
Agreeableness
What do we mean by academic performance?
Better question: Is conscientiousness, as measured by the IPIP 20-item
conscientiousness scale, related to college students' GPA?
Construct validity: does the operational definition of a variable actually measure the
variable?
Important implications for scale and test construction
If the operational definition doesn't capture the variable of interest, we can't draw
meaningful conclusions from the study
Example: using GPA as a measure of cognitive ability. Is GPA a good
operationalization of cognitive ability?
Independent variable (IV): variable being manipulated by experimenter; variable
believed to be causal variable

Dependent variable (DV): measured but not manipulated. Hypothesized to change due
to manipulation of independent variable.
Example: Study comparing several remedial reading programs to see how they affect
student reading skill
IV: remedial reading program
DV: reading skill
Relationships between variables

Positive linear relationship: If value of one variable increases, the value of the other
variable increases

Negative linear relationship: as value of one variable increases, the value of the other
variable decreases

No relationship: as value of one variable changes, value of the other variable doesn't change

Curvilinear relationship: two variables are related, but not always in same direction

Nonexperimental methods
Relationships studied by making observations or measures of behaviors of interest
Variables observed as they naturally occur
Example: As age increases, positive mood increases.

Direction of cause and effect can't be established
Example: People who exercise more have lower stress. Does exercise reduce stress, or do
people who are highly stressed have little time available to exercise?

Experimental method
Involves direct manipulation and control of independent variables
Manipulate independent variable(s), and measure effect(s) on dependent variable(s)
Can evaluate cause and effect
Experimental control: extraneous variables kept constant
Example: Does increased caffeine consumption improve reaction time?
Could bring people into the lab first thing in the morning and tell them to avoid
caffeine prior to arrival
Can ensure experimental and control groups equivalent
Caffeine could be given only to experimental group
Randomization: Don't know which participant characteristics will affect the dependent
variable
No way to guarantee groups are 100% the same
Randomly assigning participants to groups ensures that potentially confounding variables
will likely be spread evenly between groups
Randomization helps to control the effect of any variables that can't be held constant (see
the sketch below)
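A minimal sketch of what random assignment could look like in code (Python; the participant IDs and group labels are hypothetical, not from the study guide):

    import random

    participants = ["P01", "P02", "P03", "P04", "P05", "P06", "P07", "P08"]
    random.shuffle(participants)              # randomize the order of participants
    half = len(participants) // 2
    experimental_group = participants[:half]  # e.g., receives caffeine
    control_group = participants[half:]       # e.g., receives no caffeine
    print(experimental_group, control_group)

Because assignment is random, any participant characteristic (sleep, age, caffeine tolerance) is likely to be spread roughly evenly across the two groups.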

Third-variable problem: unmeasured variable may be creating relationship between 2
measured variables
Example: There is a significant positive correlation between ice cream consumption and
homicide rates.
Third variable: Heat. Heat leads to increased ice cream consumption, and
increased violent crime.
Confounding variable: third, uncontrolled variable that we know is affecting our results.
Example: Relationship between organizational level and pay satisfaction: Gender
could be confounding variable
Internal validity
Internal validity: ability to draw conclusions about causal relationships from results of
study
High internal validity: Can be reasonably certain that one variable caused changes in the
other
Strong internal validity requires:
Temporal precedence
Covariation between variables
Alternative explanations can be eliminated
External validity
External validity: ability to generalize results to other populations and settings
Examples:
Do findings from a study using undergraduate students apply to adults in
the workforce?
Does the relationship between conscientiousness and academic
performance hold when GPA is used to measure academic performance
instead of class rank?
Do work-family conflict theories based on studies conducted in the U.S.
apply to dual-income households in Japan?
External validity
In a single study, often a trade-off between internal and external validity
Artificiality of highly controlled experiments contrasts with real-world settings
But, nonexperimental methods often have poor internal validity

Why experimental methods aren't always best
Artificiality of experiments: carefully controlled laboratory experiments rarely mimic
real-world situations
Field experiments: trade-off between experimental and non-experimental
methods
Independent variable manipulated in natural setting
Example: To examine whether a new cashier training program improves job
performance, the experimenter could randomly select half of the new cashiers
at a store to go through the new training program, and the other half to go
through the old training program. Job performance could be compared
between the 2 groups

Ethical and practical concerns
Many things can't be ethically manipulated: drug use, childhood abuse, bullying,
etc.
Many variables can't be reasonably manipulated: childhood memories, parental
relationships, etc.
Person-level attributes, such as ability and personality, can't be manipulated
Why experimental methods aren't always best
Sometimes goal is merely to describe behavior
In this case, an experiment isn't needed
Example: Why do individuals choose to be affiliated with a political party?
If you want to predict behavior, cause and effect may not be important
Example: If there is a strong relationship between narcissism and workplace theft,
we don't need to know whether the cause-effect relationship holds
Using multiple methods
No study is perfect
Best understanding of a phenomenon will happen when multiple studies using multiple
research methods are used
Example: Does negative performance feedback cause stress?
Study 1: Conduct laboratory experiment with college students
Study 2: Conduct laboratory experiment with working adults
Study 3: Conduct field study comparing stress levels of employees who
received negative performance appraisal to employees who received
positive performance appraisal

Chapter 5

Reliability
Reliability: consistency or stability of a measure of behavior
Scores don't change much from one administration to the next
Classical test theory: Observed scores on a measure have 2 parts:
True score: actual score on the variable if we could measure it perfectly
Measurement error: anything other than true score that affects the observed score
X = T + E
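Worked illustration (hypothetical numbers): if Marie's true score T is 100, measurement error might produce observed scores of X = 100 + 4 = 104 on one administration and X = 100 - 3 = 97 on another. The larger the error component E, the less consistent the observed scores, and the lower the reliability.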
Unreliable measure example: A measure of intelligence is given to Marie 3 times within
1 week. Her score the first time is 50, her score the second time is 100, and her score the
third time is 15.
Reliable measure example: A measure of intelligence is given to John 3 times within one
week. His score the first time is 50, his score the second time is 52, and his score the
third time is 49.

Test-retest reliability
Measure the same individuals at 2 points in time
Test-retest reliability measured by calculating the correlation between scores at time 1 and
scores at time 2 (see the sketch below)
Should be at least .80
Problems:
Participants remember questions from time 1 to time 2, artificially inflating
correlation
Difficult to test same people twice
Some variables are expected to change over time: mood, stress, etc.
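A minimal sketch of computing test-retest reliability (Python with scipy; the scores are hypothetical):

    from scipy.stats import pearsonr

    time1 = [50, 62, 71, 45, 80, 66, 58]   # same people, first administration
    time2 = [52, 60, 73, 47, 78, 68, 55]   # second administration, one week later
    r, p = pearsonr(time1, time2)
    print(f"test-retest r = {r:.2f}")       # .80 or higher is conventionally acceptable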
Internal consistency reliability
Requires only one administration of test
Evaluates reliability by looking at degree to which answers to items correlate with one
another
Several ways to evaluate internal consistency
Split-half reliability: correlation of total score on one half of test with total score on
second half of test
Good way to do this: randomly select items for each half
Bad way to do this: correlate the first half of items with the second half of items
More items = more reliability: because splitting the test leaves each half with only half the
items, the Spearman-Brown split-half reliability coefficient is used to correct for the
reduced length (see the sketch below)
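A minimal sketch of split-half reliability with the Spearman-Brown correction (Python; the item responses are randomly generated placeholder data, not real scale data):

    import random
    from scipy.stats import pearsonr

    # 50 hypothetical respondents answering 10 binary items
    item_scores = [[random.randint(0, 1) for _ in range(10)] for _ in range(50)]

    items = list(range(10))
    random.shuffle(items)                 # random halves, not first half vs. second half
    half_a, half_b = items[:5], items[5:]

    totals_a = [sum(person[i] for i in half_a) for person in item_scores]
    totals_b = [sum(person[i] for i in half_b) for person in item_scores]

    r_half, _ = pearsonr(totals_a, totals_b)
    r_full = (2 * r_half) / (1 + r_half)  # Spearman-Brown correction for test length
    print(f"split-half r = {r_half:.2f}, corrected = {r_full:.2f}")

With random placeholder data the correlation will hover near zero; with real responses to a coherent scale it should be substantially higher.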
Cronbach's alpha: average of all possible split-half reliability coefficients
Scores on each item correlated with scores on every other item
Can be easily calculated with SPSS
Item-total correlations: correlation of each item score with total score based on all items
Good for screening items
Items with low item-total correlations can be removed from measure (see the sketch below)
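A minimal sketch of Cronbach's alpha and item-total correlations computed by hand in Python (the 5 x 4 score matrix is hypothetical; SPSS or other packages give the same alpha):

    import numpy as np

    # rows = respondents, columns = items (hypothetical Likert responses)
    scores = np.array([
        [4, 5, 4, 3],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [3, 3, 2, 3],
        [4, 4, 4, 5],
    ])

    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1)          # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.2f}")

    totals = scores.sum(axis=1)
    for i in range(k):                             # item-total correlations
        r = np.corrcoef(scores[:, i], totals)[0, 1]
        print(f"item {i + 1}: item-total r = {r:.2f}")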
Interrater reliability
Extent to which raters agree in their judgments
Example: 3 clinicians rate an individual on level of psychopathy (a rough sketch follows below)
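Correlating raters' judgments is one simple way to quantify agreement when ratings are numeric; a correlation needs several rated targets, so the sketch below assumes the three clinicians each rate the same six individuals (all numbers hypothetical). More formal indices, such as Cohen's kappa for categorical judgments or intraclass correlations, are also widely used.

    from itertools import combinations
    from scipy.stats import pearsonr

    ratings = {                                   # psychopathy ratings of 6 individuals
        "clinician_1": [12, 25, 8, 30, 17, 22],
        "clinician_2": [14, 23, 10, 28, 18, 24],
        "clinician_3": [11, 27, 7, 31, 15, 21],
    }

    rs = []
    for a, b in combinations(ratings, 2):         # every pair of raters
        r, _ = pearsonr(ratings[a], ratings[b])
        rs.append(r)
    print(f"average pairwise r = {sum(rs) / len(rs):.2f}")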
Limitations of reliability
Reliability very important!
But, a measure can be reliable but still not tell us what we need to know
Example: You develop a measure of personality which involves measuring the
circumference of someone's head. This measurement will be the same every time it's
administered (it's reliable), but it's still not telling us anything about personality
The measure is not valid
Construct validity
Construct validity related to measurement: is measure actually measuring what it is
supposed to measure?
Example: Is intelligence test measuring intelligence, or something else?
Construct validity related to variables: does operational definition actually reflect true
meaning of variable?

Example: Is GPA accurately reflecting academic ability?
Multiple different types of evidence can be used to support construct validity
Face validity
Measure seems to measure what it's supposed to measure
Example: High face validity extraversion item: I enjoy meeting new people.
Problem: Face validity not always tied to construct validity
Highly face-valid measures may not measure what they're supposed to measure
Magazine personality tests
http://www.parents.com/parents/quiz.jsp?quizId=/templatedata/ab/quiz/dat
a/BirthOrderQuiz_03052004.xml
Measures with low face validity can be measuring what they're supposed to
measure
Implicit association tests
https://implicit.harvard.edu/implicit/selectatest.html
Content validity
Does content of measure capture all of the content that defines the construct?
Example: Job performance measure includes only items related to teamwork (low
content validity)
Job performance measure includes items related to all major aspects of the job
(high content validity)
Useful, but still not sufficient
Predictive validity
Do scores on a measure predict an outcome?
Scores gathered on measure, and then outcome measured at a later time
Examples: A depression inventory should relate to future depression diagnosis, a job
knowledge test should relate to future job performance, SAT scores should relate to
first-year freshman GPA
Can analyze this using correlation and/or regression (a sketch follows below)
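A minimal sketch of examining predictive validity with correlation/regression (Python with scipy; the SAT and GPA numbers are hypothetical):

    from scipy.stats import linregress

    sat_scores = [1100, 1250, 980, 1340, 1200, 1050, 1400]   # collected first
    first_year_gpa = [3.1, 3.4, 2.6, 3.8, 3.2, 2.9, 3.9]     # outcome collected later

    result = linregress(sat_scores, first_year_gpa)
    print(f"r = {result.rvalue:.2f}, slope = {result.slope:.4f}, p = {result.pvalue:.3f}")

A sizable positive correlation between the earlier test scores and the later outcome is evidence of predictive validity.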
Concurrent validity
Looks at scores on a measure and how they relate to an outcome
Can use correlation, regression, or a test of mean differences
Unlike predictive validity, measure and outcome scores collected at same time
Two ways this is used:
Do 2 groups known to be different on a construct score differently on the measure
of that construct?
Example: Do people with high GPA score higher on an academic
achievement scale than people with low GPA?
Do people who score differently on a measure of a construct behave differently in
a situation?
Example: Do people with low scores on a job knowledge test have lower
job performance than people with high scores on a job knowledge test?

Convergent validity
Extent to which scores on a measure are related to scores on measures of same or similar
constructs
Example: Scores on a new self-esteem questionnaire should be positively related
to scores on two older self-esteem questionnaires
Example: Scores on a new intelligence test should be positively related to scores
on a problem-solving test
Can use correlation
Discriminant validity
Extent to which scores on a measure are not related to scores on unrelated measures
Example: Scores on an extraversion questionnaire should not be positively related
to scores on an introversion questionnaire
Example: Scores on an intelligence test should not be related to scores on an
agreeableness test
Can use correlation

Measurement scales
Nominal scales: no numerical properties
Categories used to differentiate responses
Examples
Gender
Major
City
Very limited in types of statistical analyses that can be performed
Ordinal scales: levels of variable can be rank-ordered
But, intervals between items not consistent or known
Example:
Rank foods in order of spiciness
The food ranked as first and the food ranked as second might be very close in
spiciness, but the third-ranked food might be much less spicy than the food ranked
second
Still very limited on types of analyses that can be used
Interval scales: difference between numbers on scales is meaningful
Difference between a 1 and a 2 is the same as the difference between a 2 and a 3
Have arbitrary 0
0 does not indicate complete absence of the trait
Scoring a 0 on an intelligence test would not mean that a person had absolutely no
intelligence
Can do a wide variety of statistical analyses
Ratio scales: difference between numbers on scale is meaningful
Scale has absolute 0

0 indicates a total absence of the quality
Few, if any, psychological scales fit this requirement
When possible, use interval-level scales
More quantitative information
More statistical analysis options
