DATA COLLECTION
Quantitative
Qualitative
Distinction Between Qualitative and Quantitative Research Methods
Case Studies
Projective Techniques
PARTICIPATORY RAPID TECHNIQUES
Mapping: Social Mapping, Body Mapping, Participatory Mapping
Seasonal Calendar
Venn/Institutional Diagram
Flow/causal Diagram
Time Trends
QUANTITATIVE METHODS
Postal
Electronic
Advantages
Quick recording of answers
Easy analysis
Risks
Loss of a lot of interesting and valuable information
Interviewers might try to fit the response into the pre-set categories
Interviewer might accept only one answer
Interviewer might suggest a possible answer
Very little space provided for recording the response
EXAMPLES OF OPEN-ENDED QUESTIONS
Tell me about your relationship with your supervisor.
How do you see your future?
Questions should flow from the least sensitive to the most sensitive.
Neutrality of Interviewers
Sample closed-ended (Likert-scale) items, each rated Strongly Disagree / Somewhat Disagree / Undecided / Somewhat Agree / Strongly Agree:
2. On the whole, I get along well with others at work.
4. When I feel uncomfortable at work, I know how to handle it.
5. I can tell that other people at work are glad to have me there.
6. I know I'll be able to cope with work for as long as I want.
8. I am confident that I can handle my job without constant assistance.
10. I can tell that my coworkers respect me.
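Likert-type items such as those above are usually scored by mapping each response label to a number and summing across items. As a minimal sketch (the 5-point mapping and the sample answers are assumptions for illustration, not taken from the questionnaire):

```python
# Map each Likert response label to a numeric score (assumed 5-point scale).
# Reverse-keyed items, if a questionnaire has any, would be scored as 6 - value.
SCALE = {
    "Strongly Disagree": 1,
    "Somewhat Disagree": 2,
    "Undecided": 3,
    "Somewhat Agree": 4,
    "Strongly Agree": 5,
}

def total_score(responses):
    """Sum the numeric values of one respondent's answers."""
    return sum(SCALE[r] for r in responses)

# Hypothetical answers from one respondent to three items:
answers = ["Somewhat Agree", "Strongly Agree", "Undecided"]
print(total_score(answers))  # 4 + 5 + 3 = 12
```

This quick, mechanical coding is exactly the analysis advantage of closed-ended items noted earlier.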
QUALITATIVE TECHNIQUES
FOCUS GROUP DISCUSSION (FGD)
A Focus Group Discussion (FGD) is a group discussion of 6-12 persons
guided by a facilitator, during which group members talk freely and
spontaneously about a certain topic.
FGDs are not used to test hypotheses or to produce research findings that
can be generalized.
PURPOSE OF FGD
1. To focus research and develop relevant research hypotheses by exploring
in greater depth the problem to be investigated and its possible causes.
2. To generate new ideas. A group works best to build on the ideas generated.
3. To formulate appropriate questions for more structured, large-scale
surveys.
4. To supplement information on community knowledge, beliefs, attitudes,
and behaviour that is already available but incomplete or unclear. For
example, the reasons for women's low participation in a development
programme can be understood through a focus group discussion among
women.
5. To develop appropriate messages for the education programme.
6. To explore controversial topics.
Key Features of the FGD
Notes of discussion details, including speaker identity, supplement the
tape recording and serve as a back-up to the moderator.
SPECIFIC COMPONENTS OF FGD
Preparation
Recruitment of Participants
Physical Arrangements
Preparation of FGD Guideline
3. Building memory
4. Maintaining naivete
Non-controlled observation:
When observation is carried out without any external force managing,
organising, or directing the normal activities or surroundings, it is
called non-controlled observation. This type of observation needs to be
supplemented by structured observation or information schedules (see
Goode & Hatt, 1952).
KEY INFORMANTS (KI) INTERVIEW
Conducting Interviews
The interview should be characterized by
Silent Probe
Phased assertion.
CASE STUDIES
A fairly exhaustive study of a person or group is called a
life history, case history, or case study. It deepens our
perception and gives us clear insight into life. Because of
its aid in studying behaviour in specific, precise detail,
Burgess termed the case study method 'the social
microscope' (Young, 1973).
Information Bias
These biases affect the validity and reliability of the study.
PRETESTING AND PILOT STUDIES
Pre-testing helps in evaluating:
the different questions,
the language,
the questionnaire format, and
the interview process.
FACE VALIDITY
Face validity is evaluated by a group of judges, sometimes experts,
who read or look at a measuring technique and decide whether in their
opinion it measures what its name suggests.
Evaluating face validity is a subjective process, but we can quantify it
by computing the amount of agreement between judges: the higher the
percentage of judges who say the instrument measures what it claims to
measure, the higher its face validity.
Every instrument must pass the face validity test either formally or
informally.
Every researcher who chooses an instrument is a judge who has
decided that the test measures the concept he or she wishes to study.
Without such minimal face validity, an instrument would not be used.
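The agreement computation described above is simple enough to sketch directly. A minimal example, assuming each judge gives a yes/no verdict (the panel data here is hypothetical):

```python
def face_validity_agreement(judgements):
    """Percentage of judges who say the instrument measures what it claims.

    `judgements` is one boolean per judge (True = "yes, it measures the
    named concept"). Higher agreement means higher face validity.
    """
    return 100.0 * sum(judgements) / len(judgements)

# Hypothetical panel of 8 judges, 6 of whom agree:
panel = [True, True, True, False, True, True, False, True]
print(face_validity_agreement(panel))  # 75.0
```

With finer-grained verdicts (e.g. ratings rather than yes/no), one would use a chance-corrected agreement statistic instead of a raw percentage.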
CONCURRENT VALIDITY
Concurrent validity is the ability of a measuring instrument
to distinguish between individuals who are known to
differ.
Thus, if a scale were being devised to measure religiosity,
the questions could be tested by administering them to one
group known to be religious, to be active in religious
activities, and otherwise to give evidence of high religiosity.
These answers would then be compared with those from a
group known not to be very religious and known to oppose
religious behaviour in other ways. If the test failed to
discriminate between the two groups, it could not be
considered a valid measure of religiosity.
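This "known-groups" check can be sketched as a comparison of mean scale scores between the two groups. A minimal illustration (all scores are hypothetical; a real study would also test whether the gap is statistically significant, e.g. with a t-test):

```python
from statistics import mean

def known_groups_gap(high_group, low_group):
    """Difference in mean scale scores between a group known to be high
    on the trait and a group known to be low. A near-zero gap suggests
    the scale fails to discriminate, i.e. poor concurrent validity."""
    return mean(high_group) - mean(low_group)

# Hypothetical religiosity scale scores (higher = more religious):
devout = [18, 20, 17, 19, 21]
secular = [8, 10, 7, 9, 11]
print(known_groups_gap(devout, secular))  # large positive gap: scale discriminates
```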
PREDICTIVE VALIDITY
Predictive validity is the ability of a measuring
instrument to identify future differences.
For instance, the predictive validity of a scale measuring
attitude towards birth control is the ability of the scale to
identify who will eventually adopt contraception and
who will not practise contraception.
Predictive validity is an evaluation of a measure's
practical worth in foreseeing the future.
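One simple way to express predictive validity is the proportion of cases where the scale's prediction matched the behaviour later observed. A sketch with hypothetical follow-up data (in practice a correlation or regression against the criterion would also be reported):

```python
def predictive_hit_rate(predictions, outcomes):
    """Fraction of cases where the scale's prediction (e.g. 'will adopt
    contraception') matched the behaviour observed at follow-up."""
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    return hits / len(outcomes)

# Hypothetical data: True = adopted contraception.
predicted = [True, True, False, True, False, False]  # from the attitude scale
observed = [True, False, False, True, False, True]   # at follow-up
print(predictive_hit_rate(predicted, observed))  # 4 of 6 predictions correct
```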
CONSTRUCT VALIDITY
Construct validity is an evaluation of the extent to which
an instrument measures the theoretical construct the
investigator wishes to measure.
Unlike face validity, construct validity requires more
than expert opinion. It requires a demonstration that the
construct in question exists, that it is distinct from other
constructs, and that the instrument measures that
particular construct and no other.
THE RELIABILITY OF MEASUREMENT
/INSTRUMENT
Scores on measuring instruments usually reflect not only
the characteristics the instrument is attempting to
measure but also a variety of constant and random errors.
The evaluation of the reliability of any measurement
procedure consists of determining how much of the
variation in scores among individuals is due to
inconsistencies in measurement.
When independent and comparable measures of the same
thing are obtained, they will yield the same results to the
extent that the measurements are free from random or
variable errors.
The reliability of a measuring instrument should be
determined before it is used in a study.
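The idea that "independent and comparable measures of the same thing yield the same results to the extent that they are free from random error" is commonly checked with test-retest reliability: administer the instrument twice and correlate the two sets of scores. A minimal sketch using the Pearson correlation (the scores below are hypothetical):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation between two sets of scores.

    For test-retest reliability, x and y are the same respondents'
    scores on two administrations; r near 1 indicates that little of
    the variation is due to random measurement error."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical test-retest scores for six respondents:
time1 = [12, 15, 11, 18, 14, 16]
time2 = [13, 14, 10, 19, 15, 17]
print(round(pearson_r(time1, time2), 2))  # high r: consistent measurement
```

Internal-consistency coefficients such as Cronbach's alpha serve the same purpose when only one administration is possible.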