
STATISTICS & METHODOLOGY

SINGLE DATA POINTS

It is very difficult, if not impossible, to make any sort of inference based on a single data point. Psychology is the
investigation of behavior that occurs regularly. Use Winnipeg murder frequencies for the last few years as a basis for
talking about what can reasonably be stated from a single number, or even a set of numbers.

Total murders reported in Winnipeg in the following years:


1991 - 16
1992 - 12
1993 - 15
1994 - 17
1995 - 16

It's difficult to interpret what any one statistic is saying. Indeed, descriptive statistics don't say anything; they are
mute. Only inferential statistics say things; they are the talkers of the statistical world. But even inferential stats must
be interpreted by someone who knows their language. There are two types of statistics: descriptive and inferential.

Descriptive statistics are the numbers that describe the phenomenon of interest. Typically, they are simply
measurements or counts. Descriptive stats are the basic data. Examples of descriptive stats are height, games won,
stars given to movies, and annual income.

Inferential statistics are more complex, because they are calculated using the descriptive statistics so that some
conclusion can be inferred from the data. We won't discuss inferential stats very much here, but some examples of
inferential stats are analysis of variance (ANOVA), t-tests and multiple regression. As you can see, inferential stats
are technical and unusual while descriptive stats are common and typical.

SAMPLES & POPULATIONS

Statistics are created out of data (evidence). To create some statistics, you must collect some data. Having decided
that you are going to collect data, you must decide what you are going to collect it on. What is your phenomenon of
interest? What level of analysis are you going to use in your study?

The phenomenon you are interested in is usually found in a large number of individuals (whether they are humans or
animals). These individuals as a group are referred to as the population. Usually data cannot be collected from the
entire population, because of time or money limitations (life is short and often costly). Thus, we typically must
collect data from only a part of the population. That part is called the sample.

A sample should be (i) representative of the entire population, (ii) randomly chosen, so that each member of the
population has an equal opportunity to participate in the study, and (iii) large, as results from larger samples are
more easily interpreted and presumably reflect the population better.
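The idea of drawing a random sample from a population can be sketched in a few lines of Python. The population here is invented purely for illustration:

```python
import random

# Hypothetical population of 1,000 test scores (invented for illustration)
random.seed(42)  # fixed seed so the sketch is repeatable
population = [random.gauss(100, 15) for _ in range(1000)]

# random.sample draws without replacement, giving every member
# of the population an equal chance of ending up in the sample
sample = random.sample(population, k=50)

print(len(sample))  # 50
```

With a fixed seed the same sample is drawn every run; in a real study the seed would be left unset so the draw is unpredictable.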

DISTRIBUTIONS

Once you have collected the data, you can lay it all out to see what you have. Laying it out means making a line of
the dimension of interest that includes each of the scores that were obtained, and then placing one x (or whatever
mark you like) above the line for each obtained score. When you have done that for all of the numbers you collected, you are
looking at the distribution of the data. That is, it shows what range of numbers or scores was collected from the sample.
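The "x above the line" layout described above amounts to a dot plot, and a minimal text version is easy to sketch. The scores here are invented for illustration:

```python
from collections import Counter

# Invented sample of scores, purely for illustration
scores = [3, 5, 5, 6, 6, 6, 7, 7, 8]
counts = Counter(scores)

# One "x" above each point on the line for every time that score occurred
for value in range(min(scores), max(scores) + 1):
    print(f"{value}: " + "x" * counts[value])
```

Printed sideways like this, the column of x's for each score is the stack that would sit above the line, so the shape of the distribution is visible at a glance.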

The normal distribution is also known as the bell curve because of its shape. For many different measures, if you
collect enough data, you will find that the distribution of the data will be a normal distribution. Mathematically, the
normal distribution is very nice to work with, because it has special characteristics that make it easy to describe the
data in the distribution.

However, distributions are not always normal. The most common non-normal distributions are skewed distributions,
which may be positively skewed (e.g., reaction time, income), or negatively skewed (e.g., height).

Knowing the distribution of your data is a good thing. However, if someone comes along and asks about your data,
the distribution is usually not what they want to see, because that shows them each and every bit of data, each datum
as it were. Often, they don't want that much information. Rather, they want a summary. The summary of the data
distribution can be given to them with statistics.

The primary summary statistics measure two aspects of the data: the central tendency, or average, of the
distribution, and the variation in the distribution. Q: Are these summary statistics descriptive or
inferential statistics? A: They are descriptive statistics, because they merely describe the data distribution.

AVERAGE (Central Tendency)

All averages are not created equal. Indeed, there are not one, not two, but THREE averages that we sometimes use
in psychology. There is the mode, which is the most frequently occurring score in the data. There is the median,
which is the middle-most score in the data. And last, but not least - indeed, it is probably the one that you think of
when you think of average - there is the mean, which is the result of summing the scores and dividing by the
number of scores that were summed. For the data: 2, 2, 4, 7, 15 -- mode = 2, median = 4, mean = 6
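The three averages for that same data set can be checked with Python's standard statistics module:

```python
import statistics

data = [2, 2, 4, 7, 15]

mode = statistics.mode(data)      # most frequently occurring score
median = statistics.median(data)  # middle-most score
mean = statistics.mean(data)      # sum of scores divided by their count

print(mode, median, mean)  # 2 4 6
```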

VARIATION

One measure of variation is the range of the scores. That is, the difference between the highest
score and the lowest score. The primary measure of variability, though, is the standard deviation,
which is a measure of the spread of the scores out from around the central tendency, or mean.
A bigger standard deviation indicates that there is more variation in the data distribution.
Distributions that have large variations are more difficult to distinguish statistically than distributions
that have small variations.
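Both measures of variation are one-liners in Python. This sketch reuses the illustrative data from the averages example, and uses the population standard deviation (statistics.stdev would give the sample version instead):

```python
import statistics

data = [2, 2, 4, 7, 15]  # same illustrative data as above

range_ = max(data) - min(data)  # highest score minus lowest score
sd = statistics.pstdev(data)    # population standard deviation around the mean

print(range_)  # 13
print(round(sd, 2))
```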

CORRELATION

Correlation is a mathematical procedure for examining the relationship between two measures or variables, which
results in a statistic called a correlation coefficient.
Q: Is the correlation coefficient a descriptive or an inferential statistic?
A: Descriptive.

The correlation coefficient is a descriptive statistic, so from it alone we can make no inference about the statistical
significance of a relationship. To make that inference, we have to use an inferential stat to test whether the
correlation is equal to zero or not.

The range of correlations runs from -1.00 to +1.00. The weakest relationship is 0.00, which indicates that the two
variables do not covary at all. A +1.00 correlation indicates that the variables covary in a perfectly positive, or
direct, manner, and a -1.00 correlation indicates that the variables covary in a perfectly negative, or indirect, manner.
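The correlation coefficient (Pearson's r) can be computed by hand from its definition; the paired measurements below are invented, standing in for something like hours studied and quiz score:

```python
import math

# Invented paired measurements (say, hours studied and quiz score)
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mx, my = sum(x) / n, sum(y) / n

# Sum of products of paired deviations from the means, scaled by
# how much each variable deviates on its own
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
r = cov / math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))

print(round(r, 3))  # a positive value: the variables covary directly
```

The scaling in the denominator is what confines r to the -1.00 to +1.00 range described above.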

The most important thing to know about correlation is that if two variables are correlated, that does not mean that
one of them causes the other. That is, correlation does NOT imply causation. It does not, because there could be
some other third variable causing both of the phenomena that are covarying.

Consider the height and weight example, or the correlation between low self-esteem and depression, where
distressing events or biological events could be the third 'active' factor.

Methodology

HYPOTHESIS
Before you select your study method, you must have a statement of inquiry. That is a statement of what you think
your study will find. That statement is a hypothesis. Hypothesis is related to hypothetical. A hypothesis is a
statement of what your study results might be. The hypothesis will be tested using inferential statistics. Based on the
outcome of those inferential statistics, you will decide whether your hypothesis is consistent with the data you
collected. Some people suggest the primary activity of science is hypothesis testing.

OPERATIONAL DEFINITIONS

In stating the hypothesis, the variables of interest need to be clearly defined. The variables should be operationally
defined. For example, if you are studying aggression, you might define aggression as the number of times your
subjects bop each other on the head. Thus, aggression - a very abstract concept - has been clearly defined for the
purposes of your study.

An important thing to notice here is that operational definitions do not mean that the thing being defined is equal to
what it is being defined as. That is, head bops are not equal to aggression. Aggression can be - and is - more than
head bopping, and head bopping is not necessarily aggressive.

This may seem obvious to you, but there are some people who do studies on abstract things that are operationally
defined, and then make broad generalizations about those abstract things, which may not be appropriate.

For example, the person who studies aggression and finds that people are more likely to bop each other on the head
after watching a Jean-Claude Van Damme movie than after watching "The Sound of Music" may claim that Van
Damme movies lead to more aggression. Although the Van Damme film does objectively contain more violent and
aggressive actions than "The Sound of Music," it may be that the Van Damme film was really bad, but contained
a scene in which Van Damme gets his head bopped by a female character; the people who see the film make fun of
it by re-enacting that head bopping scene.

It is important to define what you are talking about in your research, but the phenomenon that you are interested in is
likely bigger than the definition that you give it. Do not be fooled into believing that an operational definition is all
there is to say about a concept. This is especially common in thinking about concepts which can be measured by
some psychological test. For example, an IQ test is not all there is to intelligence. It may be part of what intelligence
is about, but it is not all of it. This applies to all other psychological concepts.

METHODS OF STUDY

The data that we have been talking about had to come from somewhere. It had to be collected somehow. There are a
few different ways of collecting data.

Case Study. This method studies one individual (or case) in detail. This is the method that Freud used in developing
his theories of personality and that Piaget used in developing his theories of human development. Present-day
cognitive neuroscience still often uses this technique. The shortcoming of this method is that any one individual can
be an anomaly, or a unique case. As mentioned earlier, unique data points are difficult, if not impossible, to interpret.
Nevertheless, much can be learned from case studies, but any generalizations from them should be made cautiously.

Survey. Many people are asked a few questions about some particular aspect of their life, or for their opinion on
some aspect of the world. This is a good method of gathering data from many different people in a short span of
time. There are several factors to consider when making a survey or interpreting survey results: (i) Who was
surveyed, and how many of them were surveyed? Representativeness is important, and the more people surveyed the
better (e.g., 3 of 4 dentists surveyed...). (ii) What was the wording of the survey questions? The wording of questions
asked of people can influence their answers. Also, the order in which the questions are asked can make a difference
(e.g., asking married people about the number of times a week they have sex before asking them to rate their marital
satisfaction can influence those satisfaction ratings). Finally, (iii) you have to trust that how people have responded
to the survey is in fact the truth; people's responses may not be what is actually the case because they are deliberately
misleading you, or because they are erroneously recalling the relevant information.

Naturalistic Observation. As the name suggests, naturalistic observation involves observing behavior as it occurs in
its natural environment. For example, observing children in a classroom, observing sibling interactions in their
homes, or observing animals in their natural habitat.

Experimentation. From those three methods of study, NO conclusions can be made about cause and effect. No
conclusions about cause and effect can be made, because nothing has been manipulated. Manipulation is simply the
introduction of an element into a situation for some people, and not for others. Only through manipulation can there
be clear and conclusive statements of cause and effect among the elements in the study.

Both types of statistics come into this, because descriptive stats (usually the mean and standard deviation) will be
used to summarize the data collected from the participants in each condition. Then inferential statistics will be used
to compare the conditions to determine whether they are the same or different (the notion of statistical significance).
Depending on the outcome of the inferential statistics, we can say that our hypothesis is supported or not supported.
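This two-step use of both kinds of statistics can be sketched with invented data from a hypothetical two-condition experiment. The inferential statistic here is a two-sample t statistic for equal group sizes, computed by hand:

```python
import math
import statistics

# Invented scores from two conditions of a hypothetical experiment
control = [12, 15, 14, 10, 13, 14]
treatment = [16, 18, 15, 17, 19, 16]

# Step 1: descriptive statistics summarize each condition
m1, s1 = statistics.mean(control), statistics.stdev(control)
m2, s2 = statistics.mean(treatment), statistics.stdev(treatment)

# Step 2: an inferential statistic compares the conditions; the t value
# would then be checked against a critical value to decide whether the
# difference between the condition means is statistically significant
n = len(control)
t = (m2 - m1) / math.sqrt((s1 ** 2 + s2 ** 2) / n)

print(round(m1, 2), round(m2, 2), round(t, 2))
```

A large t relative to the critical value would support the hypothesis that the manipulation had an effect; a small one would not.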

The variables in the experimental design that the experimenter manipulates are the independent variables. These are
the variables that researchers control and manipulate so that the phenomenon of interest is clearly and effectively
studied. Independent variables are independent of the participants in the study. Study participants are not asked what
condition they want to be in; rather, they are assigned to a condition by the researcher. The dependent variables are
what is measured, so dependent variables are the participants' responses in the experiment. Those measures are
dependent on the participants. Examples include test scores, percentage correct, and reaction time. The other factors
in the experiment are the extraneous variables, which may have an effect on the study. Researchers try to control
them by keeping them constant in each study condition. For individual differences among the participants,
researchers will control that through randomization, although it may also be achieved through exclusion. For
example, if a study involves responding to colours on a computer screen, then an experimenter would want to
exclude anyone who is colour blind. We assume that among groups the intelligence of the subjects is controlled
through random assignment of the subjects to the groups.

Also, sometimes there are control conditions, in which the independent variable has no value. These are included to
see whether the independent variables are really having the effect, or whether the effect would be achieved without them.

VALIDITY

Whether a method is valid is a question of whether the method is achieving the stated goal or not. For example, is
the test actually testing what it is claiming to test? Consider a test of marital fidelity by a coin flip. Such a test is probably not
valid. Similarly, the quizzes in this class would not be valid tests of athletic ability. However, I hope and intend the
quizzes in this class to be valid tests of the psychological knowledge that has been presented in this course.

There is more than one means to validly study a phenomenon. For example, the different perspectives on
psychology - biological, cognitive, and behavioral - could all be valid means of studying a phenomenon, but they
would not use the same method.

RELIABILITY

Does the procedure produce similar results each time it is used, or is there a lot of variation between uses? If the data
that are collected are consistent and dependable, then the method may be said to be reliable. The behaviorists had a
problem with the method of introspection, because they believed it did not produce reliable data.

OBSERVER EFFECTS

Clever Hans was a horse that could count. Someone would ask the horse what's 2 plus 2, and Hans would stamp out
4. Ask Hans what 9 minus 7 was, and he would stamp out 2. A horse that can do arithmetic. Aren't you impressed?
Clever Hans is an example of an observer effect, which occurs when the performance of a subject being studied can
be attributed to the presence of the experimenters, the observers.

Let me give you two other examples of observer effects. First, back in the 1920s, a Western Electric Company plant
in Chicago - the Hawthorne Works - was used in a study to assess how the workplace could be improved to increase
production. Specifically, researchers were interested in the relationship between illumination levels and production.
Thus, the researchers chose a particular part of the plant where some women were assembling telephone relays.

The initial findings were that when the illumination in the workplace increased, there was more productivity. Then
the researchers decreased the illumination levels to verify their results. They found that as the illumination levels
decreased, productivity still increased. The reason for the increased productivity under both increases and decreases in
illumination is that the workers were getting more attention than before. In the '20s, the average worker wasn't really
cared for. Management didn't really give a damn, and there were few governmental policies regarding the treatment
of workers in the workplace. When several researchers attended to these workers, caring about how they were
performing, the workers responded by working more productively. That was also the case for the control group,
where the illumination was not changed.

The last observer effect that I'll tell you about is more subtle, and perhaps more serious than the Hans or Hawthorne
examples, because right-minded people might think that there would be no observer effect in such a case, yet there
is. It has been found that black test-takers get higher scores on intelligence tests if the test is administered by a black person
than by a white person. We can offer many hypotheses as to why this happens, but the key point in this context is that
the person who gives someone a task can influence their performance of that task.


Psychology as Science

Is psychology a science? What is a science?


Demarcation is a word that has been used to divide the world of ideas into those that are scientific and those that are
not. The ideas that are scientific are based on an understanding of the world - broadly defined - and everything -
broadly defined - in it. That understanding of the world is derived from collecting evidence about it: collecting data,
where data are bits of empirical information. Obtaining the Truth about the world was the ultimate goal. Each
scientific activity was undertaken to get closer to the truth, if only a little bit. Scientists were believed to be blind to
anything that might interfere with their truth-seeking activity.

That all contrasts with the idea of non-scientific ideas, which might best be exemplified by religious ideas. Those
ideas are faith based rather than reality based. Non-scientific ideas are simply the way in which one feels about
something without any empirical evidence to support that feeling.

Traditionally, part of the basis for the demarcation was the idea that scientific ideas can be falsified. That means, in
principle, some evidence could be gathered that would indicate one's scientific ideas are incorrect or false.
Furthermore, it was presumed that falsifying ideas was OK to scientists. Non-scientific ideas are not falsifiable. That
is, even if you could collect some data that proved the non-existence of God (Lord knows how you would do that), or
of some of God's teachings, that would not shake the Pope's belief in God. Not one bit.

On the other hand, if you had some evidence that e=mc2 doesn't work, and you showed this evidence to Albert
Einstein, he would not be upset about that. He would be willing to accept the countervailing data and try to revise
his theory.

Over the course of the 20th century, the view on the demarcation between scientific ideas and non-scientific ideas
has changed. Early in the 20th century, the demarcation was thought to be quite clear. Things were scientific or they
were not, and never the twain should meet. However, since then the demarcation has become rather fuzzy.

Psychology as a science.
There are natural sciences, which include physics, chemistry, and biology, and there are social sciences, which
include sociology, anthropology and economics. Where psychology fits in is a matter of some dispute. Like most
disputes, the argument can get quite heated. There are some psychologists who will get pretty angry if you suggest
that psychology is NOT a natural science. Personally, I don't think that it matters much. The abilities and the
limitations of psychology are fairly well established. Psychology is a rigorous empirical discipline. To argue about
whether it is a natural science or a social science wastes energy that would be better spent on doing more
psychological research.

Psychology has all of the keys to scientific empiricism. Psychology is objective, as anyone can do the
studies that we do (who performs the study does not influence what is found); systematic, as the manner in which we
collect the evidence for our ideas does not change depending on how we feel at that moment; and repeatable, as the
evidence we gather to test our ideas can be collected again.

Levels of Analysis

There are many different ways of approaching a problem. Levels of analysis is a general way of referring to these
differences on the whole, because they are different perspectives on the problem. There's (a) the macro level of
analysis (we'd be looking at the forest here), which might be looking at the relations between groups of people
around you. There's (b) the molecular level of analysis (we'd be looking at the trees here), where you would be
looking at how fast people react to the stimuli around them. And finally, there's (c) the micro level of analysis (we'd
be looking at individual leaves here), where you'd examine things like people's hormonal changes or differences in
brain activity.

Those 3 levels of analysis are content free, because they could be applied to any field. That is, the micro, molecular
and macro levels of analysis can be applied in economics, anthropology, physics, etc.

Levels of analysis can also be described on the basis of the content of the analysis. The levels of analysis within
psychology include (i) biological, (ii) cognitive, (iii) behavioral (rewards and punishments), and (iv) social. Of
course, these levels can interact with each other. Biological changes can affect cognition, which can affect
behaviour. Typically, researchers focus on only one of the levels, although they recognize that completely
understanding any phenomenon requires analysis at all levels.

Clinical Psychologists, Research Psychologists, & Psychiatrists

There are two kinds of psychologists in the world: clinical psychologists, and sane psychologists.

No, no, actually there are clinical psychologists and research psychologists. About half of psychologists are clinical
psychologists, or clinicians. The other psychologists are research psychologists, although they may have jobs in
several different fields, where research is not the primary focus of their work. Clinical psychologists may also be
researchers, but research psychologists cannot be clinicians. This is the difference between the two: whether they do
therapy or not. Clinicians can see therapy clients (patients) as well as doing research, but researchers will only do
research; they do not see therapy clients.

The relationship between clinicians and research psychologists is similar to the relationship between medicine and
medical research, and between engineering and physics. Clinicians are applying psychological knowledge, just as
medical doctors are applying medical knowledge, and engineers are applying physical knowledge.

Clinical psychologists differ from psychiatrists in two important ways. One, their training is different. Clinical
psychologists have PhD degrees, which includes doing independent research, as well as coursework and practical
training. Psychiatrists do coursework as well as practical training, but they do not have to do any research. Two,
clinical psychologists canNOT prescribe drugs to treat their patients; psychiatrists CAN prescribe drugs.

There is quite a debate about whether the rules should be changed so that clinical psychologists can prescribe drugs.
Although it may seem reasonable to allow clinicians to prescribe drugs to help their clients, this would inevitably
mean an increase in incorrect prescriptions, because (a) people make mistakes. If more people can prescribe drugs,
then more incorrect prescriptions will be made simply because of human error. Also, (b) clinicians who are already
practicing have not received training for prescribing drugs, which increases the risk of incorrect prescriptions. The
consequences of incorrect prescriptions for patients could be very negative.

Another difference between clinical psychologists and psychiatrists is that it is harder to get into a clinical psychology
program than to get into a medical program, and you must be in a medical program before specializing in psychiatry.
At the U of Manitoba medical school, about 400 people apply and 70 are accepted. However, about 200 are from out
of province, and there are only a few places for them in the program, so for in-province applicants the acceptance
rate is about 1 in 3, or 33%. At the U of Manitoba psychology department, they have about 120-150 applicants for
their clinical psychology program, and about 6 are accepted, or about 4% to 5%. Thus, it is more difficult to become a
clinical psychologist than a psychiatrist.


DAY ONE - FALL 1998

This is introductory psychology. I am Dr. Evan Pritchard. These are the requirements for the course: 4 tests and 2
exams.

Feel free to ask questions in class. Please do not talk while the person is asking the question or while I am answering
the question. Also, please do not talk amongst yourselves during lectures, because this can be very distracting to
other students and me.

This course is about psychology. Specifically, academic or scientific psychology, not the psychology that is
discussed in most of the books in the psychology section of a bookstore. I like looking around bookstores. I
REALLY like looking around bookstores. But even in good bookstores, I hardly ever look at the books in the
psychology section. I don't look at the psychology section, because 90% of them do not have an academic or
scientific basis. Because they do not have a scientific basis, I don't know what to make of them.

This is not to say that those books that don't have a scientific basis are worthless. You can learn something from
everything, so certainly it's possible to learn something from those books. However, those books are not as valuable
to me, as a psychologist, as books with a scientific basis, or a psychology journal. Journal articles are scrutinized by
other psychologists before they are published.

GOALS

Planning is a good thing. There's a saying that 'no one plans to fail, but many fail to plan.' To make a plan, you must
have a goal. There are two types of goals: (a) performance goals & (b) learning goals. A performance goal is to
obtain a specific level of performance, or a specific outcome. Examples of performance goals are being a
millionaire, hitting 62 home runs in a season, and getting an A in intro psych.

Learning goals do not suggest a particular level of performance, but rather suggest that people improve themselves,
and try to do as best they can. Examples of learning goals would be learning something new about psychology from
a psychology lecture, and developing better tennis skills from a practice session.

Each type of goal has been linked to a type of motivation. Learning goals are linked to intrinsic motivation, while
performance goals are linked to extrinsic motivation. Generally, people perform better when they have been given
learning goals rather than performance goals. This is especially true as the complexity of the task increases.

WORKING

Once you have set your goals, then you need to take action. Taking action is just two words for work. Working is a
good thing. For example, you will do better in this course if you work at it. If you don't work at it, you may do OK.
But I assure you that if you work at it you will do better than if you don't.

Let me show you some statistics to support my point that working is a good thing. These are the statistics for Gary
Roberts, who played for the Calgary Flames. Notice the difference in his point production in the years prior to 1991-
92 (0.61 ppg) and since 1991-92 (1.20 ppg). The difference in goals is also remarkable. Over his first 4 full NHL
seasons, Roberts averaged 22 goals a season, and in his next 4 full seasons he averaged 33 goals a season (44 if you
don't count the injury-shortened 1994-95 season). Basically, Roberts's scoring doubled from years 1-4 to years 5-8.
Why might this have happened?
The reason is that there was a Canada Cup in 1991. Perhaps you are thinking "oh, so Roberts played on Team
Canada, won the Canada Cup, and that boosted his career." If you are thinking that, you are wrong, because Roberts
was not a member of Team Canada in 1991. Oh, he did get invited to their training camp. But he failed to make the
team.

However, what Roberts saw at the training camp was that other good players, superstar players, were not only
talented but those star players worked really hard. They worked harder than he himself had been working. The star
players' achievements were not simply because they were talented, but the star players also worked at improving
their skills. Thus, after getting cut from the training camp, Roberts realized that he could be an even better hockey
player than he had been, if he worked harder. So he did. The results are remarkable.

Also, John Irving, the author of The World According to Garp and A Prayer for Owen Meany, among others, was a
competitive wrestler when he was in high school and university. But he didn't have much talent for wrestling. Irving
wrestled because he really liked it. His wrestling achievements were the product of hard work. Irving's high school
wrestling coach, Ted Seabrooke, told Irving that he didn't have much talent for wrestling, which sounds pretty
discouraging. But Seabrooke also told Irving, "Talent is overrated. That you're not very talented needn't be the end of
it."

Irving feels the same way about his writing, as he did about his wrestling. Irving doesn't think that he is a talented
writer, but he works at it, and works at it, until the final draft, which I think is pretty darn good.

CONCLUSION

Set annual, monthly, weekly, and daily goals. Then work to achieve them.

The person that is most important to your achievement is you. If you can convince yourself that you
can achieve your goals, then you are already on the way to achieving them.

Effective Goal setting:

Set specific goals (measurable)
Set difficult but realistic goals
Set long term & short term goals
Set performance goals
Record the goals set
Develop goal achievement strategies
Get goal support
Get goal evaluation

Problems in goal setting:

Failing to set specific goals
Setting too many goals too soon
Failing to adjust goals
Failing to set performance goals
No follow-up evaluation

