
Social Research

Tool kit
CONTENT
Social Research and Research Process - 01
What is Research?
Why do we research?
What is a theory?
Types of Research – Definitions
What is social research?
Scope of Social Research
Objectives of the Research Proposal
Why should research objectives be developed?
Important Concepts - 02
Variable:
Types of Variables
Measurement Scales
Operational definition
When is an operational definition used?
Operational definition: How is it made?
Hypotheses
What is a hypothesis?
Formulation of hypotheses
Types and Forms of Hypotheses
Research Design
Formulating a Research Problem - 03
Identification of a research problem
Role of Identification in the Planning Process
Performing Identification

Literature Review - 04
Introduction
Components of Literature Review
Why do we write literature reviews?
Before writing the literature review
Strategies for Writing the Literature Review:

Tools of Data - 05
Classification of Data
Primary Data Collection
Questionnaires
Interviews
Observation
Case-studies
Diaries
Critical incidents
Portfolios
Secondary Data Collection
Coding Qualitative Data

Sampling - 06
What is a sample?
What is sampling?
What is the purpose of sampling?
Bias and error in sampling
Types of samples
Purposeful sampling
Sample size

Specific Application - 07
Baseline Studies
Need assessment
Action Research
The Action Research Process
Principles of Action Research
Evaluation research
What is evaluation research?
Types of evaluation research
Evaluation methods
Cost Benefit Analysis

Qualitative Research - 08


Characteristics
Maintaining The Validity Of Qualitative Research
Assumptions Underlying Qualitative Methods
Qualitative Methods
Concept Mapping
Sampling
Data Collection Methods

About Toolkit
You can use this toolkit to find out about:
 Social Research
 Questionnaire
 Review of Literature
 Sampling
 Qualitative Research

HOW TO USE THIS TOOLKIT


Social Research
Social research may be defined as a scientific undertaking which, by means of logical and
systematized techniques, aims to discover new facts or verify and test old facts.
Go to Chapter – 01

Review of Literature
A literature review discusses published information in a particular subject area, and sometimes
information in a particular subject area within a certain time period.
Go to Chapter – 04

Questionnaire
Questionnaires are a popular means of collecting data.
Go to Chapter - 05

Sampling
Sampling is the act, process, or technique of selecting a suitable sample, or a representative part
of a population for the purpose of determining parameters or characteristics of the whole
population.
Go to Chapter – 06

Qualitative Research
Qualitative Research is collecting, analyzing, and interpreting data by observing what people do
and say.
Go to Chapter - 08

Social Research and
Research Process

Chapter - 01

What is Research?

The process of gathering and analysing information for the purpose of answering a question or
solving a problem.

Research means:
 careful or diligent search.
 studious inquiry or examination; especially: investigation or experimentation aimed at
the discovery and interpretation of facts, revision of accepted theories or laws in the light
of new facts, or practical application of such new or revised theories or laws.
 the collecting of information about a particular subject.

Why do we research?

Research deals with problems, attitudes and opinions. With research we seek to understand why
things behave the way they do, why people act in a certain way, or what makes people buy a
particular product.

Research attempts to seek answers to questions. It draws conclusions from the information
(commonly referred to as data) gathered. Often, the conclusions are generalised (this means the
basic principles discovered could be applied to other areas that were not studied). Research adds
to the existing body of knowledge. It attempts to improve our understanding of the world in
which we live.

What is a theory?

A theory is an explanation of some property that attempts to explain its behaviour or
characteristics. There have been many theories proposed throughout our history. One such theory
was that the earth was flat. Another is the theory of evolution. Theories attempt to explain that
which we seek to understand. Often, future advances in technology or thinking refute theories
that held sway for generations. Research is a never-ending process of continual investigation.

Theories shape our view of the world and of ourselves. We are affected by the theories of the
past and present. We live in a world that is explained by theories, and these theories are espoused
in all that we read and see (via the medium of television and print). In fact, we are exposed to
these theories from an early age, and are indoctrinated into them throughout the educational
system.

A theory is a good theory if it satisfies a number of requirements:


Validity - The theory fits the facts.
Generalisation - It can make predictions about the results of future observations.
Replication - It can be duplicated with the same results.

Types of Research – Definitions

Action research is a methodology that combines action and research to examine specific
questions, issues or phenomena through observation and reflection, and deliberate intervention to
improve practice.
Applied research is research undertaken to solve practical problems rather than to acquire
knowledge for knowledge sake.

Basic research is experimental and theoretical work undertaken to acquire new knowledge
without looking for long-term benefits other than the advancement of knowledge.

Clinical trials are research studies undertaken to determine better ways to prevent, screen for,
diagnose or treat diseases.

Epidemiological research is concerned with the description of health and welfare in populations
through the collection of data related to health and the frequency, distribution and determinants
of disease in populations, with the aim of improving health.

Evaluation research is research conducted to measure the effectiveness or performance of a
program, concept or campaign in achieving its objectives.

Literature review is a critical examination, summarisation, interpretation or evaluation of
existing literature in order to establish current knowledge on a subject.

Qualitative research is research undertaken to gain insights concerning attitudes, beliefs,
motivations and behaviours of individuals to explore a social or human problem and includes
methods such as focus groups, in-depth interviews, observation research and case studies.

Quantitative research is research concerned with the measurement of attitudes, behaviours and
perceptions and includes interviewing methods such as telephone, intercept and door-to-door
interviews as well as self-completion methods such as mail outs and online surveys.

Service or program monitoring and evaluation involves collecting and analysing a range of
processes and outcome data in order to assess the performance of a service or program and to
determine if the intended or expected results have been achieved.

What is social research?

Social research may be defined as a scientific undertaking which, by means of logical and
systematized techniques, aims to:
 Discover new facts or verify and test old facts;
 Analyze their sequence, interrelationships, and causal explanations which were derived
within an appropriate theoretical framework of reference; and
 Develop new scientific tools, concepts and theories which would facilitate reliable and
valid study of human behaviour.

Social research is a systematic method of exploring analyzing and conceptualising social life in
order to “extend, correct or verify knowledge, whether that knowledge aid in the construction of
a theory or in the practice of an art”

Social research seeks to find explanations to unexplained social phenomena, to clarify the
doubtful and correct the misconceived facts of social life.
Social Research presents:
1. A systematic method of exploring actual persons and groups, focused primarily on their
experiences within their social worlds, inclusive of social attitudes and values.
2. A mode of analysis of these experiences which permits stating propositions in testable
form.

Scope of Social Research

Social researchers should use the possibilities open to them to extend the scope of social enquiry
and communicate their findings, for the benefit of the widest possible community.

Social researchers develop and use concepts and techniques for the collection, analysis or
interpretation of data. Although they are not always in a position to determine their work or the
way in which their data are ultimately disseminated and used, they are frequently able to
influence these matters.

Academic researchers enjoy probably the greatest degree of autonomy over the scope of their
work and the dissemination of the results. Even so, they are generally dependent on the decision
of funding agencies on the one hand and journal editors on the other for the direction and
publication of their enquiries.

Social researchers employed in the public sector and those employed in commerce and industry
tend to have less autonomy over what they do or how their data are utilised. Rules of secrecy
may apply; pressure may be exerted to withhold or delay the publication of findings (or of
certain findings); inquiries may be introduced or discontinued for reasons that have little to do
with technical considerations. In these cases the final authority for decisions about an inquiry
may rest with the employer or client.

Professional experience in many countries suggests that social researchers are most likely to
avoid restrictions being placed on their work when they are able to stipulate in advance the issues
over which they should maintain control. Government researchers may, for example, gain
agreement to announce dates for publication for various statistical series, thus creating an
obligation to publish the data on the due dates regardless of intervening political factors.
Similarly, researchers in commercial contracts may specify that control over at least some of the
findings (or details of methods) will rest in their hands rather than with their clients. The greatest
problems seem to occur when such issues remain unresolved until the data emerge.

Objectives of the Research Proposal

The objectives of a research project summarise what is to be achieved by the study.

Objectives should be closely related to the statement of the problem. For example, if the problem
identified is low utilisation of child welfare clinics, the general objective of the study could be to
identify the reasons for this low utilisation, in order to find solutions.
The general objective of a study states what researchers expect to achieve by the study in
general terms.

It is possible (and advisable) to break down a general objective into smaller, logically connected
parts. These are normally referred to as specific objectives.

Specific objectives should systematically address the various aspects of the problem as defined
under ‘Statement of the Problem’ and the key factors that are assumed to influence or cause the
problem. They should specify what you will do in your study, where and for what purpose.

Why should research objectives be developed?

The formulation of objectives will help you to:

 Focus the study (narrowing it down to essentials);
 Avoid the collection of data which are not strictly necessary for understanding and
solving the problem you have identified; and
 Organise the study in clearly defined parts or phases.

Properly formulated, specific objectives will facilitate the development of your research
methodology and will help to orient the collection, analysis, interpretation and utilisation of data.

How should you state your objectives?

Take care that the objectives of your study:

 Cover the different aspects of the problem and its contributing factors in a coherent way
and in a logical sequence;
 Are clearly phrased in operational terms, specifying exactly what you are going to do,
where, and for what purpose;
 Are realistic considering local conditions; and
 Use action verbs that are specific enough to be evaluated.

Keep in mind that when the project is evaluated, the results will be compared to the objectives. If
the objectives have not been spelled out clearly, the project cannot be evaluated.

Important Concepts

Chapter – 02

Variables, Respondents, Operational Definition, Hypothesis, Research Design

Variable:

Understanding the nature of a variable is essential to statistical analysis. Different data types
demand distinct treatment. Using the appropriate statistical measures both to describe your data
and to infer meaning from your data requires that you clearly understand their distinguishing
characteristics.

Variable - a characteristic that takes on different values/conditions for different individuals.

Types of Variables

Independent Variable - a variable that affects the dependent variable under study and is included
in the research design so that its effects can be determined. (Also known as a predictor variable
in certain types of research.)

Levels of the Variable - describes how many different values or categories an independent
variable has in a research design.

Dependent Variable - a variable being affected or assumed to be affected by an independent
variable. (Variable used to measure the effects of independent variables. Also known as an
outcome variable in certain types of research.)

Organismic Variable - a preexisting characteristic of an individual that cannot be randomly
assigned to that individual (e.g. gender). Serves as a control variable only when its effects are
known/predetermined.

Intervening Variable - a variable whose existence is inferred, but which cannot be manipulated
or directly measured. Also known as nuisance variables, mediator variables, or confounding
variables.

Control Variable - an independent variable not of primary interest whose effects are determined
by the researcher. (May be included in the research design to help explain variation in results.)

Moderator Variable - a variable that may or may not be controlled, but has an effect on the
research situation.

 when controlled - control variable (effects are known)
 when uncontrolled - intervening variable (effects unknown)
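A small invented example can make the variable roles above concrete. Everything in this sketch (the study, the field names, the numbers) is hypothetical and for illustration only:

```python
# A hypothetical study: does a tutoring programme (independent variable)
# affect test scores (dependent variable)?

study_design = {
    "independent": "tutoring (yes/no)",   # manipulated by the researcher
    "dependent": "test score",            # outcome being measured
    "control": "hours of sleep",          # held constant across groups
    "organismic": "gender",               # preexisting, cannot be assigned
}

# Two levels of the independent variable, with invented scores:
records = [
    {"tutoring": "yes", "score": 82},
    {"tutoring": "yes", "score": 78},
    {"tutoring": "no",  "score": 71},
    {"tutoring": "no",  "score": 69},
]

def mean_score(level):
    """Mean of the dependent variable at one level of the independent variable."""
    scores = [r["score"] for r in records if r["tutoring"] == level]
    return sum(scores) / len(scores)

print(mean_score("yes") - mean_score("no"))   # 10.0: the observed effect
```

The difference between the two group means is the effect the design is built to detect; the control and organismic variables are listed so that alternative explanations can be ruled out.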

Measurement Scales

Measurement scales refer to how we attempt to capture the ways in which units of analysis differ
from one another in relation to a particular variable. There are four basic measurement scales that
become respectively more precise: nominal, ordinal, interval and ratio. The precision of each type
is directly related to the type of statistical test that can be performed on them. The more precise
the measurement scale, the more sophisticated the statistical analysis can be.
Nominal - measurement scale in which numbers are used as names of categories; i.e.,
categorizes without order. (Frequency data)

Ordinal - measurement scale that categorizes and indicates relative amount or rank-order of a
characteristic. (Ordered data)

Interval - measurement scale that categorizes, indicates relative amount or rank-order of a
characteristic, and has units of equal distance between consecutive points on the scale. (Score
data)

Ratio - measurement scale that categorizes, indicates relative amount or rank-order of a
characteristic, has units of equal distance between consecutive points on the scale, and compares
terms as ratios of one to another (i.e. has a true zero point). Rarely used in social science
research.
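The nesting of the four scales, where each scale permits every statistic of the scales below it, can be sketched in code. The groupings below follow common statistics-textbook conventions rather than anything prescribed by this toolkit:

```python
# Each measurement scale inherits the permissible statistics of all
# less precise scales; these example groupings are conventional.

SCALE_ORDER = ["nominal", "ordinal", "interval", "ratio"]

# Statistics first permitted at each level of precision.
NEW_STATISTICS = {
    "nominal": {"mode", "frequency count", "chi-square"},
    "ordinal": {"median", "percentile", "rank correlation"},
    "interval": {"mean", "standard deviation", "Pearson correlation"},
    "ratio": {"geometric mean", "coefficient of variation"},
}

def permitted_statistics(scale):
    """Return every statistic appropriate for a variable on `scale`,
    including those inherited from less precise scales."""
    allowed = set()
    for level in SCALE_ORDER:
        allowed |= NEW_STATISTICS[level]
        if level == scale:
            return allowed
    raise ValueError(f"unknown scale: {scale}")

# An ordinal variable (e.g. a Likert rating) supports the median but not the mean:
print("median" in permitted_statistics("ordinal"))   # True
print("mean" in permitted_statistics("ordinal"))     # False
```

This is why choosing the scale of measurement before analysis matters: a mean computed on ordinal data, for instance, treats rank positions as if they were equally spaced scores.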

Operational definition

Operational definition - a definition expressed in terms of the processes or operations and
conditions that are being used to measure the characteristic under study.

An operational definition, when applied to data collection, is a clear, concise detailed definition
of a measure. The need for operational definitions is fundamental when collecting all types of
data. It is particularly important when a decision is being made about whether something is
correct or incorrect, or when a visual check is being made where there is room for confusion. For
example, data collected will be erroneous if those completing the checks have different views of
what constitutes a fault at the end of a glass panel production line. Defective glass panels may be
passed and good glass panels may be rejected. Similarly, when invoices are being checked for
errors, the data collection will be meaningless if the definition of an error has not been specified.
When collecting data, it is essential that everyone in the system has the same understanding and
collects data in the same way. Operational definitions should therefore be made before the
collection of data begins.

When is an operational definition used?

Any time data is being collected, it is necessary to define how to collect the data. Data that is not
defined will usually be inconsistent and will give an erroneous result. It is easy to assume that
those collecting the data understand what and how to complete the task. However, people have
different opinions and views, and these will affect the data collection. The only way to ensure
consistent data collection is by means of a detailed operational definition that eliminates
ambiguity.

Operational definition: How is it made?

Identify the characteristic of interest.


Identify the characteristic to be measured or the defect type of concern.

Select the measuring instrument.
The measuring instrument is usually either a physical piece of measuring equipment such as a
micrometer, weighing scale, or clock; or alternatively, a visual check. Whenever a visual check
is used, it is necessary to state whether normal eyesight is to be used or a visual aid such as a
magnifying glass. In the example, normal eyesight is sufficient. On some occasions, it may also
be necessary to state the distance the observer should be from the item being checked. In general,
the closer the observer, the more detail will be seen. In the example, a clear visual indication is
given of acceptable and unacceptable, so the observer needs to be in a position where the
decision can be made. When completing a visual check, the type of lighting may also need to be
specified. Certain colors and types of light can make defects more apparent.

Describe the test method.


The test method is the actual procedure used for taking the measurement. When measuring time,
the start and finish points of the test need to be specified. When taking any measurement, the
degree of accuracy also needs to be stated. For instance, it is important to know whether time
will be measured in hours, minutes, or seconds.

State the decision criteria.


The decision criteria represent the conclusion of the test. Does the problem exist? Is the item
correct? Whenever a visual check is used, a clear definition of acceptable versus unacceptable is
essential. Physical examples or photographs of acceptable and unacceptable, together with
written support, are the best definitions.

Document the operational definition.


It is important that the operational definition is documented and standardized. Definitions should
be included in training materials and job procedure sheets. The results of steps 1 through 4
should be included in one document. The operational definition and the appropriate standards
should be kept at the work station.

Test the operational definition.


It is essential to test the operational definition before implementation. Input from those that are
actually going to complete the tests is particularly important. The operational definition should
make the task clear and easy to perform. The best way to test an operational definition is to ask
different people to complete the test on several items by following the operational definition.
Watch how they perform the test. Are they completing the test as expected? Are the results
consistent? Are the results correct?
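As a minimal sketch, the glass-panel check described earlier could be written down as explicit, testable criteria, so that every inspector applies the same rule. All thresholds and measurement details here are invented for illustration, not taken from any real standard:

```python
# A hypothetical operational definition for the glass-panel example.
# Writing the criteria as code forces every term to be unambiguous.

def panel_is_acceptable(scratch_length_mm, bubble_count):
    """Operational definition (illustrative): a panel passes if no scratch
    exceeds 10 mm and it contains at most 2 visible bubbles."""
    MAX_SCRATCH_MM = 10.0   # measuring instrument: steel rule, nearest mm
    MAX_BUBBLES = 2         # visual check, normal eyesight, arm's length
    return scratch_length_mm <= MAX_SCRATCH_MM and bubble_count <= MAX_BUBBLES

print(panel_is_acceptable(8.0, 1))    # True: within both criteria
print(panel_is_acceptable(12.5, 0))   # False: scratch too long
```

Testing this definition, as step 6 above recommends, would mean asking several inspectors to classify the same panels with it and checking that their results agree.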

Hypotheses

What is a hypothesis?
When we set our aims, we try to achieve them first of all by the means available to us, by our
knowledge. If we succeed, it was simply a task, since we only applied our existing knowledge. It
often happens, however, that our background knowledge contains no proper tools to achieve
the aim; our knowledge is not enough, and we face a problem. Problems occurring in science
call for a mental search, the subject of which is some new knowledge that is suitable for
solving the problem. In order to find this knowledge, instead of searching for it only at random,
we must have some preconception, some idea about what we are looking for. A conjecture in fact
means anticipating a possible solution to the problem. A hypothesis (from the Greek 'hypothesis',
meaning 'foundation' or 'basis') is a particular kind of conjecture that clearly formulates a
suggestion about the solution to a certain problem. It includes surplus knowledge by which we
move beyond the previous level of knowledge. In a hypothesis a new idea is formulated; in other
words, every hypothesis includes something new and original, an element of imagination and
guessing. For this advantageous feature, however, we must pay by taking the risk of error.

Research problem statement - by itself provides general direction for the study; it does not
include all the specific information.

Hypothesis - a conjecture or proposition about the solution to a problem, the relationship of
two or more variables, or the nature of some phenomenon (i.e. an educated guess based on
available facts).

A good hypothesis should:

 state an expected relationship between two or more variables
 be based on either theory or evidence (and worthy of testing)
 be testable
 be as brief as possible consistent with clarity
 be stated in declarative form
 be operational by eliminating ambiguity in the variables or proposed relationships

Formulation of hypotheses

When we formulate a hypothesis we would like to say something new, original and shocking.
Originality is a relative definition, just like truth. Striving for new knowledge suggests two
epistemological imperatives mutually excluding each other:

 a researcher should strive for truth
 a researcher should strive for original knowledge.

Striving for truth means compatibility with the background knowledge, whereas originality
means incompatibility with the same background knowledge, or calls at least for avoiding
knowledge that is true in the ordinary sense. The paradox can be settled: striving for originality is
a requirement valid for formulating a hypothesis, while striving for truth is the principle of the
evaluation stage of a hypothesis.

Types and Forms of Hypotheses

Research (Substantive) Hypothesis - simple declarative statement of the hypothesis guiding the
research.

Statistical Hypothesis:

 a statement of the hypothesis given in statistical terms.
 a statement about one or more parameters that are measures of the population under
study.
 a translation of the research hypothesis into a statistically meaningful relationship.

Null Hypothesis - a statistical hypothesis stated specifically for testing (which reflects the no
difference situation).

Alternative Hypothesis - an alternative to the null hypothesis that reflects a significant difference
situation.

Directional Hypothesis - a hypothesis that implies the direction of results.

Nondirectional Hypothesis - a hypothesis that does not imply the direction of results.
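A hedged sketch can show how these pieces fit together: a research hypothesis is translated into a null and an alternative hypothesis and then tested. The data below are invented, and Welch's t statistic is one common choice among several, not something prescribed by this toolkit:

```python
import statistics as st

# Research hypothesis: "the programme group scores higher than control."
# Null hypothesis H0: no difference in population means.
# Directional alternative H1: mean(programme) > mean(control).

programme = [72, 75, 69, 80, 77, 74]   # invented scores
control   = [70, 68, 71, 66, 69, 72]

def two_sample_t(a, b):
    """Welch's t statistic for the difference of two sample means."""
    var_a, var_b = st.variance(a), st.variance(b)
    se = (var_a / len(a) + var_b / len(b)) ** 0.5
    return (st.mean(a) - st.mean(b)) / se

t = two_sample_t(programme, control)
# A directional (one-tailed) test rejects H0 only for large positive t;
# a nondirectional (two-tailed) test rejects for large |t| in either direction.
print(round(t, 2))
```

The direction of the rejection region is exactly the difference between a directional and a nondirectional hypothesis: the same statistic is compared against a one-sided or a two-sided critical value.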

Foreshadowed Problems - (in ethnographic research) statements of specific research problems
that provide a focus for the research. They identify factors for the researcher to consider without
specifying anticipated results.

Research Questions - relatively narrow, specific delineations of what the proposed research will
address. Questions emerge from the researcher's topic of interest plus information gathered
during the literature review.

 Emerge from either theory or evidence (literature review)
 Operationalize the research problem by identifying variables and/or the relationships
among variables
 Be answerable

Research Design

A research design is the logical and systematic planning and directing of a piece of research. The
design results from translating a general scientific model into varied research procedures. The
design has to be geared to the available time, energy and money; to the availability of data; and
to the extent to which it is desirable or possible to impose upon the persons and social
organizations which might supply the data. In other words, a study design is tentative. As the
study progresses, new aspects and new connecting links in the data come to light, and it is
necessary to change the plan as circumstances demand.

The most meaningful and revealing studies are those that are conceived from a definite point of
view, but the views are modified as necessary in the process of study, as well as those that are
dominated by a definite set of scientific interests which can be enlarged or curtailed, as the study
in process requires.

For a smoothly operating research pattern – a routine procedure which is at one and the same
time practical for administrative purposes in applied research and rigorous as to scientific
prescription:
1. Prompt attention to problems needing study.
2. Personal contacts and discussion with top executives involved in the problems of study.
3. "Scouting around" in order to observe, inspect, examine and survey, first in a preliminary
and later in a general way, the problems and situations of the study.
4. Informal interviews with enlisted men in selected camps.
5. Preliminary, but lengthy, discussions with staff about the data obtained by them.
6. Drafting questionnaires and schedules.
7. Pretesting questionnaires and schedules.
8. Examination of results of pretests to detect and eliminate inconsistencies, obscurities and
vagueness.
9. Drafting revised questionnaires and schedules.
10. Conference with the initiator of the request for a study to ensure clearness and
completeness of the proposed study.
11. Drafting final questionnaires and schedules.
12. Outlining field interviews.
13. Analysing collected data.
14. Drafting of final report.

Operating plans in an extensive research study should include such considerations as:
1. Length of time required to produce questionnaires and other similar devices in tested
form.
2. Manner of selection and training of research personnel and orientation of
collaborators in an integrated research project.
3. Cost of supplies, equipment, tabulation forms, printing of questionnaires, drafting of
charts, graphs and maps.
4. Time needed for consultations and conferences with collaborators and committees.
5. Execution of the study in relation to its scope, objectives and resources.
6. Coordination with other related studies.

Formulating a
Research Problem

Chapter – 03

Identification of a research problem

Selection of a research problem

 Select a research topic
 State the research problem from information gathered pertaining to the selected topic

The research problem should:

 be of interest to the researcher and at least some segment of the educational community
 ensure some degree of originality (or extension of existing knowledge)
 have theoretical and/or practical educational significance
 add to existing knowledge in a meaningful way
 be researchable, feasible, and ethical

Sources for research problems/topics:

 research interest of professor/mentor
 discussion with other graduate students
 researcher's professional experience & interests
 current issues in education or social sciences
 research & professional literature

Here are general guidelines which will help in selecting a research problem:

 Role of Identification in the Planning Process
 Performing Identification
 Integrating Identification Results
 Reporting Identification Results
 Recommended Sources of Technical Information

Role of Identification in the Planning Process

Identification is undertaken for the purpose of locating historic properties and is composed of a
number of activities which include, but are not limited to, archival research, informant interviews,
field survey and analysis. Combinations of these activities may be selected and appropriate levels
of effort assigned to produce a flexible series of options. Generally, identification activities will
have multiple objectives, reflecting complex management needs. Within a comprehensive
planning process, identification is normally undertaken to acquire property-specific information
needed to refine a particular historic context or to develop any new historic contexts. (See the
Guidelines for Preservation Planning for discussion of information gathering to establish plans
and develop historic contexts.) The results of identification activities are then integrated into the
planning process so that subsequent activities are based on the most up-to-date information.
Identification activities are also undertaken in the absence of a comprehensive planning process,
most frequently as part of a specific land use or development project. Even lacking a formally
developed preservation planning process, the benefits of efficient, goal-directed research may be
obtained by the development of localised historic contexts, suitable in scale for the project areas,
as part of the background research which customarily occurs before field survey efforts.
Performing Identification

1. Research Design
Identification activities are essentially research activities for which a statement of objectives or
research design should be prepared before work is performed. Within the framework of a
comprehensive planning process, the research design provides a vehicle for integrating the
various activities performed during the identification process and for linking those activities
directly to the goals and the historic context(s) for which those goals were defined. The research
design stipulates the logical integration of historic context(s) and field and laboratory
methodology. Although these tasks may be performed individually, they will not contribute to
the greatest extent possible in increasing information on the historic context unless they relate to
the defined goals and to each other. Additionally, the research design provides a focus for the
integration of interdisciplinary information. It ensures that the linkages between specialised
activities are real, logical and address the defined research questions. Identification activities
should be guided by the research design and the results discussed in those terms. (See Reporting
Identification Results.)

The research design should include the following:

Objectives of the identification activities. For example: to characterise the range of historic
properties in a region; to identify the number of properties associated with a context; to gather
information to determine which properties in an area are significant. The statement of objectives
should refer to current knowledge about the historic contexts or property types, based on
background research or assessments of previous research. It should clearly define the physical
extent of the area to be investigated and the amount and kinds of information to be gathered
about properties in the area.

Methods to be used to obtain the information. For example: archival research or field survey.
Research methods should be clearly and specifically related to research problems.
Archival research or survey methods should be carefully explained so that others using the
gathered information can understand how the information was obtained and what its possible
limitations or biases are. The methods should be compatible with the past and present
environmental character of the geographical area under study and the kinds of properties most
likely to be present in the area.

The expected results and the reason for those expectations. Expectations about the kind,
number, location, character and condition of historic properties are generally based on a
combination of background research, proposed hypotheses, and analogy to the kinds of
properties known to exist in areas of similar environment or history.

2. Archival Research
Archival or background research is generally undertaken prior to any field survey. Where
identification is undertaken as part of a comprehensive planning process, background research
may have taken place as part of the development of the historic contexts (see the Guidelines for
Preservation Planning). In the absence of previously developed historic contexts, archival
research should address specific issues and topics. It should not duplicate previous work. Sources
should include, but not be limited to, historical maps, atlases, tax records, photographs, folk life
documentation, oral histories and other studies, as well as standard historical reference works, as
appropriate for the research problem.
3. Field Survey
The variety of field survey techniques available, in combination with the varying levels of effort
that may be assigned, give great flexibility to implementing field surveys. It is important that the
selection of field survey techniques and level of effort be responsive to the management needs
and preservation goals that direct the survey effort.

Survey techniques may be loosely grouped into two categories, according to their results. First
are the techniques that result in the characterisation of a region's historic properties. Such
techniques might include "windshield" or walk-over surveys, with perhaps a limited use of sub-
surface survey. For purposes of these Guidelines, this kind of survey is termed a
"reconnaissance." The second category of survey techniques is those that permit the
identification and description of specific historic properties in an area; this kind of survey effort
is termed "intensive." The terms "reconnaissance" and "intensive" are sometimes defined to
mean particular survey techniques, generally with regard to prehistoric sites. The use of the terms
here is general and is not intended to redefine the terms as they are used elsewhere.

Reconnaissance survey might be most profitably employed when gathering data to refine a
developed historic context—such as checking on the presence or absence of expected property
types, to define specific property types or to estimate the distribution of historic properties in an
area. The results of regional characterisation activities provide a general understanding of the
historic properties in a particular area and permit management decisions that consider the
sensitivity of the area in terms of historic preservation concerns and the resulting implications for
future land use planning. The data should allow the formulation of estimates of the necessity,
type and cost of further identification work and the setting of priorities for the individual tasks
involved. In most cases, areas surveyed in this way will require re-survey if more complete
information is needed about specific properties.

A reconnaissance survey should document:


 The kinds of properties looked for;
 The boundaries of the area surveyed;
 The method of survey, including the extent of survey coverage;
 The kinds of historic properties present in the surveyed area;
 Specific properties that were identified, and the categories of information collected; and
 Places examined that did not contain historic properties.

Intensive survey is most useful when it is necessary to know precisely what historic properties
exist in a given area or when information sufficient for later evaluation and treatment decisions is
needed on individual historic properties. Intensive survey describes the distribution of properties
in an area; determines the number, location and condition of properties; determines the types of
properties actually present within the area; permits classification of individual properties; and
records the physical extent of specific properties. An intensive survey should document:
 The kinds of properties looked for;
 The boundaries of the area surveyed;
 The method of survey, including an estimate of the extent of survey coverage;
 A record of the precise location of all properties identified; and

 Information on the appearance, significance, integrity and boundaries of each property
sufficient to permit an evaluation of its significance.
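Purely as an illustration (the field names below are invented for this sketch, not prescribed by the Guidelines), the documentation items above could be captured as a simple structured record for each property found during an intensive survey:

```python
from dataclasses import dataclass, field

@dataclass
class PropertyRecord:
    """One identified property from an intensive survey."""
    survey_id: str
    location: str            # precise location, e.g. a grid reference
    property_kind: str       # kind of property looked for / found
    appearance: str          # notes sufficient for later evaluation
    integrity: str
    boundaries: str
    significance_notes: str = ""

@dataclass
class IntensiveSurveyLog:
    """Documentation for the survey as a whole."""
    area_boundaries: str
    method: str              # including an estimate of survey coverage
    kinds_sought: list[str] = field(default_factory=list)
    records: list[PropertyRecord] = field(default_factory=list)

# Hypothetical example entries
log = IntensiveSurveyLog(
    area_boundaries="NW quarter of project area",
    method="full pedestrian coverage, 15 m transects",
    kinds_sought=["farmsteads", "mill sites"],
)
log.records.append(PropertyRecord(
    survey_id="S-001", location="grid 431/882",
    property_kind="mill site", appearance="stone foundation, race visible",
    integrity="good", boundaries="c. 40 x 25 m",
))
print(f"{len(log.records)} property record(s) in survey log")
```

Keeping each documentation item as a named field makes it easy to check later that every record carries enough information for an evaluation of significance.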

4. Sampling
Reconnaissance or intensive survey methods may be employed according to a sampling
procedure to examine less-than-the-total project or planning area.

Sampling can be effective when several locations are being considered for an undertaking or
when it is desirable to estimate the cultural resources of an area. In many cases, especially where
large land areas are involved, sampling can be done in stages. In this approach, the results of the
initial large area survey are used to structure successively smaller, more detailed surveys. This
"nesting" approach is an efficient technique since it enables characterisation of both large and
small areas with reduced effort. As with all investigative techniques, such procedures should be
designed to permit an independent assessment of results.
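The staged "nesting" approach can be sketched as follows (an illustration only; the block counts and the rule for flagging a block as sensitive are invented): a coarse first-stage sample decides which large blocks deserve attention, and only those blocks receive a finer second-stage sample.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical study area: 10 large blocks, each divided into 25 small units.
blocks = {b: [f"{b}-{u}" for u in range(25)] for b in range(10)}

# Stage 1: reconnaissance over a random 40% of the large blocks.
stage1 = random.sample(sorted(blocks), k=4)

# Suppose the walk-over flags two of the stage-1 blocks as sensitive
# (in practice this judgement comes from real field observations).
sensitive = stage1[:2]

# Stage 2: intensive survey of a 20% sample of units, but only inside
# the blocks that stage 1 flagged -- the "nesting" saves effort elsewhere.
stage2 = {b: random.sample(blocks[b], k=5) for b in sensitive}

total_units = sum(len(u) for u in stage2.values())
print(f"Stage 1 examined {len(stage1)} blocks; "
      f"stage 2 examines {total_units} units in {len(sensitive)} blocks")
```

Because each stage is drawn by an explicit rule, the procedure can be documented and independently re-assessed, as the Guidelines require.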

Various types of sample surveys can be conducted, including, but not limited to: random,
stratified and systematic. Selection of sample type should be guided by the problem the survey is
expected to solve, the nature of the expected properties and the nature of the area to be surveyed.
Sample surveys may provide data to estimate frequencies of properties and types of properties
within a specified area at various confidence levels. Selection of confidence levels should be
based upon the nature of the problem the sample survey is designed to address.
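To make the three sample types and the confidence-level idea concrete, here is a minimal sketch (the area, strata and counts are invented for illustration). It draws a simple random, a systematic and a proportionally stratified sample of survey units, then estimates the proportion of units containing properties with a normal-approximation 95% confidence interval.

```python
import math
import random

random.seed(1)

# Hypothetical planning area: 200 numbered survey units in two strata,
# e.g. 120 river-terrace units and 80 upland units.
units = list(range(200))
strata = {"terrace": units[:120], "upland": units[120:]}
n = 40  # overall sample size

# Simple random sample: every unit has an equal chance of selection.
srs = random.sample(units, n)

# Systematic sample: every k-th unit after a random start.
k = len(units) // n
start = random.randrange(k)
systematic = units[start::k][:n]

# Stratified sample: allocate the sample proportionally to stratum size.
stratified = []
for name, members in strata.items():
    share = round(n * len(members) / len(units))
    stratified.extend(random.sample(members, share))

# Estimating a proportion at a chosen confidence level: suppose 12 of
# the 40 sampled units contained historic properties.
hits = 12
p = hits / n
z = 1.96  # 95% confidence; use 1.645 for 90% or 2.576 for 99%
half_width = z * math.sqrt(p * (1 - p) / n)
print(f"Estimated proportion: {p:.2f} ± {half_width:.2f}")
```

Choosing a higher confidence level widens the interval; the level should follow from how much uncertainty the planning decision can tolerate.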

Predictive modelling is an application of basic sampling techniques that projects or extrapolates the number, classes and frequencies of properties in un-surveyed areas based on those found in surveyed areas.
surveyed areas. Predictive modelling can be an effective tool during the early stages of planning
an undertaking, for targeting field survey and for other management purposes. However, the
accuracy of the model must be verified; predictions should be confirmed through field testing
and the model redesigned and re-tested if necessary.
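A minimal sketch of the extrapolation step (all figures are invented): the observed site density per environmental zone in the surveyed fraction is projected onto the un-surveyed remainder of each zone, and the projection is then held up against later field-test results.

```python
# Hypothetical survey results: area surveyed (km^2) and sites found, per zone.
surveyed = {
    "floodplain": {"km2": 4.0, "sites": 10},
    "terrace":    {"km2": 6.0, "sites": 9},
    "upland":     {"km2": 10.0, "sites": 2},
}
# Total extent of each zone across the whole planning area.
total_km2 = {"floodplain": 20.0, "terrace": 30.0, "upland": 50.0}

# Project the observed density onto the un-surveyed remainder of each zone.
predicted = {}
for zone, obs in surveyed.items():
    density = obs["sites"] / obs["km2"]        # sites per km^2
    unsurveyed = total_km2[zone] - obs["km2"]
    predicted[zone] = density * unsurveyed

for zone, est in predicted.items():
    print(f"{zone}: ~{est:.0f} additional sites expected")

# The model is only a planning tool: if field testing in a zone finds far
# fewer or far more sites than predicted, the densities (or the zone
# definitions) should be revised and the model re-tested.
```

The value of such a model lies less in the exact numbers than in making the assumptions explicit enough to be verified in the field.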

Special survey techniques

Special survey techniques may be needed in certain situations.

Remote sensing techniques may be the most effective way to gather background environmental
data, plan more detailed field investigations, discover certain classes of properties, map sites,
locate and confirm the presence of predicted sites, and define features within properties. Remote
sensing techniques include aerial, subsurface and underwater techniques. Ordinarily the results
of remote sensing should be verified through independent field inspection before making any
evaluation or statement regarding frequencies or types of properties.

Integrating Identification Results

The results of identification efforts must be integrated into the planning process so that planning
decisions are based on the best available information. The new information is first assessed
against the objectives of the identification efforts to determine whether the gathered information
meets the defined identification goals for the historic context(s); then the goals are adjusted
accordingly. In addition, the historic context narrative, the definition of property types and the

planning goals for evaluation and treatment are all adjusted as necessary to accommodate the
new data.

Reporting Identification Results

Reporting of the results of identification activities should begin with the statement of objectives
prepared before undertaking the survey. The report should respond to each of the major points
documenting:

 Objectives;
 Area researched or surveyed;
 Research design or statement of objectives;
 Methods used, including the intensity of coverage. If the methods differ from those
outlined in the statement of objectives, the reasons should be explained.
 Results: how the results met the objectives; result analysis, implications and
recommendations; where the compiled information is located.

A summary of the survey results should be available for examination and distribution. Identified
properties should then be evaluated for possible inclusion in appropriate inventories.

Protection of information about archeological sites or other properties that may be threatened by
dissemination of that information is necessary. These may include fragile archeological
properties or properties such as religious sites, structures, or objects, whose cultural value would
be compromised by public knowledge of the property's location.

The problem statement must provide adequate focus and direction for the research. It should
also identify key variables of the study and suggest the appropriate methodology for the study.

Review of Literature

Chapter - 05

Literature Review

A literature review is an account of what has been published on a topic by accredited scholars
and researchers. Occasionally you will be asked to write one as a separate assignment
(sometimes in the form of an annotated bibliography), but
more often it is part of the introduction to an essay, research report, or thesis. In writing the
literature review, your purpose is to convey to your reader what knowledge and ideas have been
established on a topic, and what their strengths and weaknesses are. As a piece of writing, the
literature review must be defined by a guiding concept (e.g., your research objective, the problem
or issue you are discussing, or your argumentative thesis). It is not just a descriptive list of the
material available, or a set of summaries.

Besides enlarging your knowledge about the topic, writing a literature review lets you gain and
demonstrate skills in two areas:

1. information seeking: the ability to scan the literature efficiently, using manual or
computerized methods, to identify a set of useful articles and books
2. critical appraisal: the ability to apply principles of analysis to identify unbiased and
valid studies.

A literature review must do these things:


 be organized around and related directly to the thesis or research question you are
developing
 synthesize results into a summary of what is and is not known
 identify areas of controversy in the literature
 formulate questions that need further research

In other words

A literature review discusses published information in a particular subject area, and sometimes
information in a particular subject area within a certain time period.

A literature review can be just a simple summary of the sources, but it usually has an
organizational pattern and combines both summary and synthesis. A summary is a recap of the
important information of the source, but a synthesis is a re-organization, or a reshuffling, of that
information. It might give a new interpretation of old material or combine new with old
interpretations. Or it might trace the intellectual progression of the field, including major debates.
And depending on the situation, the literature review may evaluate the sources and advise the
reader on the most pertinent or relevant.

Components of Literature Review

Similar to primary research, development of the literature review requires four stages:

 Problem formulation—which topic or field is being examined and what are its component
issues?
 Literature search—finding materials relevant to the subject being explored

 Data evaluation—determining which literature makes a significant contribution to the
understanding of the topic
 Analysis and interpretation—discussing the findings and conclusions of pertinent
literature

Literature reviews should comprise the following elements:

 An overview of the subject, issue or theory under consideration, along with the objectives
of the literature review
 Division of works under review into categories (e.g. those in support of a particular
position, those against, and those offering alternative theses entirely)
 Explanation of how each work is similar to and how it varies from the others
 Conclusions as to which pieces are best considered in their argument, are most
convincing of their opinions, and make the greatest contribution to the understanding and
development of their area of research

In assessing each piece, consideration should be given to:

 Provenance—What are the author's credentials? Are the author's arguments supported by
evidence (e.g. primary historical material, case studies, narratives, statistics, recent
scientific findings)?
 Objectivity—Is the author's perspective even-handed or prejudicial? Is contrary data
considered or is certain pertinent information ignored to prove the author's point?
 Persuasiveness—Which of the author's theses are most/least convincing?
 Value—Are the author's arguments and conclusions convincing? Does the work
ultimately contribute in any significant way to an understanding of the subject?

Why do we write literature reviews?

Literature reviews provide you with a handy guide to a particular topic. If you have limited time
to conduct research, literature reviews can give you an overview or act as a stepping stone. For
professionals, they are useful reports that keep them up to date with what is current in the field.
For scholars, the depth and breadth of the literature review emphasizes the credibility of the
writer in his or her field. Literature reviews also provide a solid background for a research
paper's investigation. Comprehensive knowledge of the literature of the field is essential to most
research papers.

Before writing the literature review

Clarify

If your assignment is not very specific, seek clarification from your instructor:

 Roughly how many sources should you include?


 What types of sources (books, journal articles, and websites)?
 Should you summarize, synthesize, or critique your sources by discussing a common
theme or issue?

 Should you evaluate your sources?
 Should you provide subheadings and other background information, such as definitions
and/or a history?

Find models

Look for other literature reviews in your area of interest or in the discipline and read them to get
a sense of the types of themes you might want to look for in your own research or ways to
organize your final review. You can simply put the word "review" in your search engine along
with your other topic terms to find articles of this type on the Internet or in an electronic
database. The bibliography or reference section of sources you've already read are also excellent
entry points into your own research.

Narrow your topic

There are hundreds or even thousands of articles and books on most areas of study. The narrower
your topic, the easier it will be to limit the number of sources you need to read in order to get a
good survey of the material. Your instructor will probably not expect you to read everything
that's out there on the topic, but you'll make your job easier if you first limit your scope.
And don't forget to tap into your professor's (or other professors') knowledge in the field. Ask
your professor questions such as: "If you had to read only one book from the 70's on topic X,
what would it be?" Questions such as this help you to find and determine quickly the most
seminal pieces in the field.

Consider whether your sources are current

Some disciplines require that you use information that is as current as possible. In the sciences,
for instance, treatments for medical problems are constantly changing according to the latest
studies. Information even two years old could be obsolete. However, if you are writing a review
in the humanities, history, or social sciences, a survey of the history of the literature may be what
is needed, because what is important is how perspectives have changed through the years or
within a certain time period. Try sorting through some other current bibliographies or literature
reviews in the field to get a sense of what your discipline expects. You can also use this method
to consider what is "hot" and what is not.

Strategies for Writing the Literature Review:

Find a focus

A literature review, like a term paper, is usually organized around ideas, not the sources
themselves as an annotated bibliography would be organized. This means that you will not just
simply list your sources and go into detail about each one of them, one at a time. No. As you
read widely but selectively in your topic area, consider instead what themes or issues connect
your sources together. Do they present one or different solutions? Is there an aspect of the field
that is missing? How well do they present the material and do they portray it according to an
appropriate theory? Do they reveal a trend in the field? A raging debate? Pick one of these
themes to focus the organization of your review.

Construct a working thesis statement

Then use the focus you've found to construct a thesis statement. Yes! Literature reviews have
thesis statements as well! However, your thesis statement will not necessarily argue for a
position or an opinion; rather it will argue for a particular perspective on the material. Some
sample thesis statements for literature reviews are as follows:

The current trend in treatment for congestive heart failure combines surgery and medicine.
More and more cultural studies scholars are accepting popular media as a subject worthy of
academic consideration.

Consider organization

You've got a focus, and you've narrowed it down to a thesis statement. Now what is the most
effective way of presenting the information? What are the most important topics, subtopics, etc.,
that your review needs to include? And in what order should you present them? Develop an
organization for your review at both a global and local level:

First, cover the basic categories

Just like most academic papers, literature reviews also must contain at least three basic elements:
 an introduction or background information section
 the body of the review containing the discussion of sources
 a conclusion and/or recommendations section to end the paper.

Introduction: Gives a quick idea of the topic of the literature review, such as the central theme or
organizational pattern.
Body: Contains your discussion of sources and is organized either chronologically, thematically,
or methodologically (see below for more information on each).
Conclusions/Recommendations: Discuss what you have drawn from reviewing literature so far.
Where might the discussion proceed?

Organizing the body

Once you have the basic categories in place, then you must consider how you will present the
sources themselves within the body of your paper. Create an organizational method to focus this
section even further.

To help you come up with an overall organizational framework for your review, consider three typical ways of organizing the sources into a review:

Chronological:
If your review follows the chronological method, you could write about the materials according to when they were published.

By Publication
Order your sources by publication chronology only if the order demonstrates an important trend. For instance, you could order a review of literature on biological studies of sperm whales chronologically if the progression revealed a change in the dissection practices of the researchers who wrote and/or conducted the studies.

By Trend
A better way to organize sources chronologically is to examine them under
another trend, such as the history of whaling. Then your review would have subsections
according to eras within this period. For instance, the review might examine whaling from pre-1600, 1600-1699, 1700-1799, and 1800-1899. Under this method, sources dealing with the same era are grouped together, even if their authors wrote a century apart.

Thematic:
Thematic reviews of literature are organized around a topic or issue, rather than the progression
of time. However, progression of time may still be an important factor in a thematic review. A
review organized in this manner would shift between time periods within each section according
to the point made.

Methodological:
A methodological approach differs from the two above in that the focusing factor usually does
not have to do with the content of the material. Instead, it focuses on the "methods" of the
researcher or writer. A methodological scope will influence either the types of documents in the
review or the way in which these documents are discussed.
Once you've decided on the organizational method for the body of the review, the sections you
need to include in the paper should be easy to figure out. They should arise out of your
organizational strategy. In other words, a chronological review would have subsections for each
vital time period. A thematic review would have subtopics based upon factors that relate to the
theme or issue.
Sometimes, though, you might need to add additional sections that are necessary for your study,
but do not fit in the organizational strategy of the body. What other sections you include in the
body is up to you. Put in only what is necessary. Here are a few other sections you might want to
consider:

Current Situation
Information necessary to understand the topic or focus of the literature review.

History
The chronological progression of the field, the literature, or an idea that is necessary to
understand the literature review, if the body of the literature review is not already a chronology.

Methods and/or Standards


The criteria you used to select the sources in your literature review or the way in which you
present your information. For instance, you might explain that your review includes only peer-
reviewed articles and journals.

Questions for Further Research


What questions about the field has the review sparked? How will you further your research as a
result of the review?

Begin Composition
Once you've settled on a general pattern of organization, you're ready to write each section.

Use evidence
A literature review in this sense is just like any other academic research paper. Your
interpretation of the available sources must be backed up with evidence to show that what you
are saying is valid.

Be selective
Select only the most important points in each source to highlight in the review. The type of
information you choose to mention should relate directly to the review's focus, whether it is
thematic, methodological, or chronological.

Use quotes sparingly


The literature review does not allow for in-depth discussion or detailed quotes from the text.
Some short quotes here and there are okay, though, if you want to emphasize a point, or if what
the author said just cannot be rewritten in your own words.

Summarize and synthesize


Remember to summarize and synthesize your sources within each paragraph as well as
throughout the review.

Keep your own voice


While the literature review presents others' ideas, your voice (the writer's) should remain front
and center.

Use caution when paraphrasing

When paraphrasing a source that is not your own, be sure to represent the author's information or
opinions accurately and in your own words.

Revise, revise, and revise

Draft in hand? Now you're ready to revise. Spending a lot of time revising is a wise idea, because
your main objective is to present the material, not the argument. So check over your review again
to make sure it follows the assignment and/or your outline. Then, just as you would for most
other academic forms of writing, rewrite or rework the language of your review so that you've
presented your information in the most concise manner possible. Be sure to use terminology
familiar to your audience; get rid of unnecessary jargon or slang. Finally, double check that
you've documented your sources and formatted the review appropriately for your discipline.

Tools of Data Collection

Chapter – 06

Data can be classified as either primary or secondary.

Primary Data
Primary data mean original data that have been collected specially for the purpose in mind.

Secondary Data
Secondary data are data that were originally collected for another purpose, often by someone else, for example published statistics, administrative records or the findings of earlier studies. Results derived by processing primary data (for instance, published tables computed from a survey) are likewise treated as secondary data when they are reused.

Primary Data Collection


In primary data collection, you collect the data yourself using methods such as interviews and
questionnaires. The key point here is that the data you collect is unique to you and your research
and, until you publish, no one else has access to it.
There are many methods of collecting primary data and the main methods include:
 questionnaires
 interviews
 focus group interviews
 observation
 case-studies
 diaries
 critical incidents
 portfolios.

The primary data, which is generated by the above methods, may be qualitative in nature
(usually in the form of words) or quantitative (usually in the form of numbers or where you can
make counts of words used). We briefly outline these methods but you should also read around
the various methods.

Questionnaires

Questionnaires are a popular means of collecting data, but are difficult to design and often
require many rewrites before an acceptable questionnaire is produced.

Advantages:
 Can be used as a method in its own right or as a basis for interviewing or a telephone
survey.
 Can be posted, e-mailed or faxed.
 Can cover a large number of people or organisations.
 Wide geographic coverage.
 Relatively cheap.
 No prior arrangements are needed.
 Avoids embarrassment on the part of the respondent.
 Respondent can consider responses.
 Possible anonymity of respondent.
 No interviewer bias.
Disadvantages:
 Design problems.
 Questions have to be relatively simple.
 Historically low response rate (although inducements may help).
 Time delay whilst waiting for responses to be returned.
 Require a return deadline.
 Several reminders may be required.
 Assumes no literacy problems.
 No control over who completes it.
 Not possible to give assistance if required.
 Problems with incomplete questionnaires.
 Replies not spontaneous and independent of each other.
 Respondent can read all questions beforehand and then decide whether to complete or
not.

Interviews

Interviewing is a technique that is primarily used to gain an understanding of the underlying reasons and motivations for people’s attitudes, preferences or behaviour. Interviews can be
undertaken on a personal one-to-one basis or in a group. They can be conducted at work, at
home, in the street or in a shopping centre, or some other agreed location.

Personal interview
Advantages:
 Serious approach by respondent resulting in accurate information.
 Good response rate.
 Completed and immediate.
 Possible in-depth questions.
 Interviewer in control and can give help if there is a problem.
 Can investigate motives and feelings.
 Can use recording equipment.
 Characteristics of respondent assessed – tone of voice, facial expression, hesitation, etc.
 Can use props.
 If one interviewer used, uniformity of approach.
 Used to pilot other methods.

Disadvantages:
 Need to set up interviews.
 Time consuming.
 Geographic limitations.
 Can be expensive.
 Normally need a set of questions.
 Respondent bias – tendency to please or impress, create false personal image, or end
interview quickly.
 Embarrassment possible if personal questions.
 Transcription and analysis can present problems – subjectivity.
 If many interviewers, training required.

Types of interview

Structured:
 Based on a carefully worded interview schedule.
 Frequently require short answers with the answers being ticked off.
 Useful when there are a lot of questions which are not particularly contentious or thought
provoking.
 Respondent may become irritated by having to give over-simplified answers.

Semi-structured:
 The interview is focused by asking certain questions but with scope for the respondent to
express him or herself at length.

Unstructured
 This is also called an in-depth interview. The interviewer begins by asking a general
question. The interviewer then encourages the respondent to talk freely. The interviewer
uses an unstructured format, the subsequent direction of the interview being determined
by the respondent’s initial reply. The interviewer then probes for elaboration – ‘Why do
you say that?’ or, ‘That’s interesting, tell me more’ or, ‘Would you like to add anything
else?’ being typical probes.
 The following section is a step-by-step guide to conducting an interview. You should
remember that all situations are different and therefore you may need refinements to the
approach.

Planning an interview:

 List the areas in which you require information.


 Decide on type of interview.
 Transform areas into actual questions.
 Try them out on a friend or relative.
 Make an appointment with respondent(s) – discussing details of why and how long.
 Try and fix a venue and time when you will not be disturbed.

Conducting an interview:

Personally:
 arrive on time
 be smart
 smile
 employ good manners
 find a balance between friendliness and objectivity

At the start:
 introduce yourself
 re-confirm the purpose
 assure confidentiality, if relevant
 specify what will happen to the data

The questions:
 speak slowly in a soft, yet audible tone of voice
 control your body language
 know the questions and the topic
 ask all the questions

Responses:
 record answers as you go on the questionnaire; writing verbatim is accurate but slow and time-consuming, so you may summarise instead
 if taping, agree this beforehand and have an alternative method ready if it is not acceptable; consider the effect on the respondent’s answers; use proper equipment in good working order, with sufficient tapes and batteries and a minimum of background noise

At the end:
 ask if the respondent would like to give further details about anything, or has any questions about the research
 thank them

Focus group interviews


A focus group is an interview conducted by a trained moderator in a non-structured and natural
manner with a small group of respondents. The moderator leads the discussion. The main
purpose of focus groups is to gain insights by listening to a group of people from the appropriate
target market talk about specific issues of interest.

Observation

Observation involves recording the behavioural patterns of people, objects and events in a
systematic manner. Observational methods may be:

 structured or unstructured
 disguised or undisguised
 natural or contrived
 personal
 mechanical
 non-participant
 participant, with the participant taking a number of different roles.

Structured or unstructured
In structured observation, the researcher specifies in detail what is to be observed and how the
measurements are to be recorded. It is appropriate when the problem is clearly defined and the
information needed is specified.

In unstructured observation, the researcher monitors all aspects of the phenomenon that seem
relevant. It is appropriate when the problem has yet to be formulated precisely and flexibility is
needed in observation to identify key components of the problem and to develop hypotheses. The
potential for bias is high. Observation findings should be treated as hypotheses to be tested rather
than as conclusive findings.

Disguised or undisguised
In disguised observation, respondents are unaware they are being observed and thus behave
naturally. Disguise is achieved, for example, by hiding, or using hidden equipment or people
disguised as shoppers.

In undisguised observation, respondents are aware they are being observed. There is a danger of
the Hawthorne effect – people behave differently when being observed.

Natural or contrived
Natural observation involves observing behaviour as it takes place in the environment, for
example, eating hamburgers in a fast food outlet.

In contrived observation, the respondents’ behaviour is observed in an artificial environment, for
example, a food tasting session.

Personal

In personal observation, a researcher observes actual behaviour as it occurs. The observer does
not attempt to control or manipulate the phenomenon being observed, but merely records what
takes place.

Mechanical
Mechanical devices (video, closed circuit television) record what is being observed. These
devices may or may not require the respondent’s direct participation. They are used for
continuously recording on-going behaviour.

Non-participant
The observer does not normally question or communicate with the people being observed. He or
she does not participate.

Participant
In participant observation, the researcher becomes, or is, part of the group that is being
investigated. Participant observation has its roots in ethnographic studies (study of man and
races) where researchers would live in tribal villages, attempting to understand the customs and
practices of that culture. It has a very extensive literature, particularly in sociology (development,
nature and laws of human society) and anthropology (physiological and psychological study of
man). Organisations can be viewed as ‘tribes’ with their own customs and practices.
The role of the participant observer is not simple. There are different ways of classifying the
role:
 Researcher as employee.
 Researcher as an explicit role.
 Interrupted involvement.
 Observation alone.

Case-studies

The term case-study usually refers to a fairly intensive examination of a single unit such as a
person, a small group of people, or a single company. Case-studies involve measuring what is
there and how it got there. In this sense, it is historical. It can enable the researcher to explore,
unravel and understand problems, issues and relationships. It cannot, however, allow the
researcher to generalise, that is, to argue that from one case-study the results, findings or theory
developed apply to other similar case-studies. The case looked at may be unique and, therefore,
not representative of other instances. It is, of course, possible to look at several case-studies to
represent certain features of management that we are interested in studying. The case-study
approach is often done to make practical improvements. Contributions to general knowledge are
incidental.
The case-study method has four steps:

 Determine the present situation.
 Gather background information about the past and key variables.
 Test hypotheses. The background information collected will have been analysed for
possible hypotheses. In this step, specific evidence about each hypothesis can be
gathered. This step aims to eliminate possibilities which conflict with the evidence
collected and to gain confidence for the important hypotheses. The culmination of this
step might be the development of an experimental design to test out more rigorously the
hypotheses developed, or it might be to take action to remedy the problem.
 Take remedial action. The aim is to check that the hypotheses tested actually work out in
practice. Some action, correction or improvement is made and a re-check carried out on
the situation to see what effect the change has brought about.

The case-study enables rich information to be gathered from which potentially useful hypotheses
can be generated. It can be a time-consuming process. It is also inefficient for researching
situations which are already well structured and where the important variables have been
identified. Case-studies lack utility when attempting to reach rigorous conclusions or when
determining precise relationships between variables.

Diaries

A diary is a way of gathering information about the way individuals spend their time on
professional activities. Diaries here are not records of engagements or personal journals of
thought! They can record either quantitative or qualitative data, and in management research
can provide information about work patterns and activities.

Advantages:

 Useful for collecting information from employees.
 Different writers compared and contrasted simultaneously.
 Allows the researcher freedom to move from one organisation to another.
 Researcher not personally involved.
 Diaries can be used as a preliminary or basis for intensive interviewing.
 Used as an alternative to direct observation or where resources are limited.

Disadvantages:

 Subjects need to be clear about what they are being asked to do, why and what you plan
to do with the data.
 Diarists need to be of a certain educational level.
 Some structure is necessary to give the diarist focus, for example, a list of headings.
 Encouragement and reassurance are needed as completing a diary is time-consuming and
can be irritating after a while.
 Progress needs checking from time-to-time.
 Confidentiality is required as content may be critical.
 Analysis can be a problem, so you need to consider how responses will be coded before the
subjects start filling in diaries.

Critical incidents

The critical incident technique is an attempt to identify the more ‘noteworthy’ aspects of job
behaviour and is based on the assumption that jobs are composed of critical and non-critical
tasks. For example, a critical task might be defined as one that makes the difference between
success and failure in carrying out important parts of the job. The idea is to collect reports about
what people do that is particularly effective in contributing to good performance. The incidents
are scaled in order of difficulty, frequency and importance to the job as a whole.

The technique scores over the use of diaries as it is centred on specific happenings and on what is
judged as effective behaviour. However, it is laborious and does not lend itself to objective
quantification.

Portfolios

A measure of a manager’s ability may be expressed in terms of the number and duration of
‘issues’ or problems being tackled at any one time. Compiling a problem portfolio involves
recording information about how each problem arose, the methods used to solve it, the
difficulties encountered, etc. This analysis also raises questions about the person’s use of time.
What proportion of time is occupied in checking; in handling problems given by others; on
self-generated problems; on ‘top-priority’ problems; on minor issues, etc.? The main problem
with this method and the use of diaries is getting people to agree to record everything in
sufficient detail for you to analyse. It is very time-consuming!

Secondary Data Collection

All methods of data collection can supply quantitative data (numbers, statistics or financial) or
qualitative data (usually words or text). Quantitative data may often be presented in tabular or
graphical form. Secondary data is data that has already been collected by someone else for a
different purpose to yours. For example, this could mean using:

 data supplied by a marketing organisation
 annual company reports
 government statistics

Secondary data can be used in different ways:

 You can simply report the data in its original format. If so, then it is most likely that the
place for this data will be in your main introduction or literature review as support or
evidence for your argument.
 You can do something with the data. If you use it (analyse it or re-interpret it) for a
different purpose to the original then the most likely place would be in the ‘Analysis of
findings’ section of your dissertation.
As secondary data has been collected for a different purpose to yours, you should treat it with
care. The basic questions you should ask are:

 Where has the data come from?
 Does it cover the correct geographical location?
 Is it current (not too out of date)?
 If you are going to combine with other data are the data the same (for example, units,
time, etc.)?
 If you are going to compare with other data are you comparing like with like?

Thus you should make a detailed examination of the following:


 Title (for example, the time period that the data refers to and the geographical coverage).
 Units of the data.
 Source (some secondary data is already secondary data).
 Column and row headings, if presented in tabular form.

There are many sources of data and most people tend to underestimate the number of sources and
the amount of data within each of these sources.

Sources can be classified as:

Paper-based sources – books, journals, periodicals, abstracts, indexes, directories, research
reports, conference papers, market reports, annual reports, internal records of organisations,
newspapers and magazines.

Electronic sources – CD-ROMs, on-line databases, Internet, videos and broadcasts.


The main sources of qualitative and quantitative secondary data include the following:

Official or government sources –


Census Reports
SRS – Vital Statistics
Reports of States, Country and Municipal Health Departments
Report of Police Department, prisons, jails, courts, probation department
Report of National Sample Survey Department
Reports of State Domestic Products
Reports of Public Welfare Department
Report of State Board of Education, etc…

Unofficial or general business sources


Report of Council of social agencies
International sources

Coding Qualitative Data


Description: Coding—using labels to classify and assign meaning to pieces of information—
helps you to make sense of qualitative data, such as responses to open-ended survey questions.
Codes answer the questions, “What do I see going on here?” or “How do I categorize the
information?” Coding enables you to organize large amounts of text and to discover patterns that
would be difficult to detect by reading alone.
Coding Steps:
1. Initial coding. It’s usually best to start by generating numerous codes as you read through
responses, identifying data that are related without worrying about the variety of categories.
Because codes are not always mutually exclusive, a piece of information might be assigned
several codes.
2. Focused coding. After initial coding, it is helpful to review codes and eliminate less useful
ones, combine smaller categories into larger ones, or if a very large number of responses have
been assigned the same code, subdivide that category. At this stage you should see repeating
ideas and can begin organizing codes into larger themes that connect different codes. It may help
to spread responses across a floor or large table when trying to identify themes.
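The two coding passes can be sketched in a few lines of Python; the survey responses, code labels and themes below are invented purely for illustration.

```python
# Hypothetical open-ended survey responses (invented for illustration).
responses = [
    "The clinic is too far away and the bus is unreliable",
    "Staff were friendly but the waiting time was very long",
    "I could not afford the consultation fee",
    "Long queues; I waited three hours",
]

# Step 1: initial coding -- assign one or more labels per response.
# Codes are not mutually exclusive, so a response may carry several.
initial_codes = {
    0: ["distance", "transport"],
    1: ["staff attitude", "waiting time"],
    2: ["cost"],
    3: ["waiting time"],
}

# Step 2: focused coding -- group related codes under broader themes.
themes = {
    "access barriers": {"distance", "transport", "cost"},
    "service quality": {"staff attitude", "waiting time"},
}

# Count how many responses touch each theme.
theme_counts = {}
for codes in initial_codes.values():
    for theme, members in themes.items():
        if members & set(codes):  # any overlap between codes and theme
            theme_counts[theme] = theme_counts.get(theme, 0) + 1

print(theme_counts)  # {'access barriers': 2, 'service quality': 2}
```

In a real study the initial code list would grow and shrink as you read, and the themes would emerge from the data rather than being fixed in advance.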
Sampling

Chapter - 07

What is a sample?

A sample is a finite part of a statistical population whose properties are studied to gain
information about the whole. When dealing with people, it can be defined as a set of respondents
(people) selected from a larger population for the purpose of a survey.

A population is a group of individual persons, objects, or items from which samples are taken
for measurement, for example a population of presidents or professors, books or students.

What is sampling?

Sampling is the act, process, or technique of selecting a suitable sample, or a representative part
of a population for the purpose of determining parameters or characteristics of the whole
population.

What is the purpose of sampling?

To draw conclusions about populations from samples, we must use inferential statistics which
enables us to determine a population’s characteristics by directly observing only a portion (or
sample) of the population. We obtain a sample rather than a complete enumeration (a census) of
the population for many reasons. Obviously, it is cheaper to observe a part rather than the whole,
but we should prepare ourselves to cope with the dangers of using samples. In this tutorial, we
will investigate various kinds of sampling procedures. Some are better than others but all may
yield samples that are inaccurate and unreliable. We will learn how to minimize these dangers,
but some potential error is the price we must pay for the convenience and savings the samples
provide.

There would be no need for statistical theory if a census rather than a sample was always used to
obtain information about populations. But a census may not be practical and is almost never
economical. There are six main reasons for sampling instead of doing a census.

These are:
 Economy
 Timeliness
 The large size of many populations
 Inaccessibility of some of the population
 Destructiveness of the observation
 Accuracy

The economic advantage of using a sample in research

Obviously, taking a sample requires fewer resources than a census.

The time factor


A sample may provide you with needed information quickly. For example, suppose you are a
doctor and a disease has broken out in a village within your area of jurisdiction; the disease is
contagious, it kills within hours, and nobody knows what it is. You are required to conduct quick
tests to help save the situation. If you attempt a census of those affected, they may be long dead
when you arrive with your results. In such a case, just a few of those already infected could be
used to provide the required information.

The very large populations


Many populations about which inferences must be made are quite large.

The partly accessible populations

There are some populations that are so difficult to get access to that only a sample can be used,
such as people in prison, crashed aeroplanes in the deep sea, or presidents. The inaccessibility
may be economic or time related: a particular study population may be so costly to reach, like
the population of planets, that only a sample can be used. In other cases, events in a population
may take so long to occur that only sample information can be relied on.

The destructive nature of the observation

Sometimes the very act of observing the desired characteristic destroys the unit being observed,
as when fuses are tested to destruction.

Accuracy and sampling

A sample may be more accurate than a census. A sloppily conducted census can provide less
reliable information than a carefully obtained sample.

Bias and error in sampling


A sample is expected to mirror the population from which it comes; however, there is no
guarantee that any sample will be precisely representative of that population.

Chance may dictate that a disproportionate number of untypical observations will be made; for
example, when testing fuses, the sample may contain a higher or lower proportion of faulty fuses
than the real population. In practice, it is rarely known when a sample is unrepresentative and
should be discarded.

Sampling error
What can make a sample unrepresentative of its population? One of the most frequent causes is
sampling error.
Sampling error comprises the differences between the sample and the population that are due
solely to the particular units that happen to have been selected.
The more dangerous error is the less obvious sampling error against which nature offers very
little protection.
There are two basic causes for sampling error.
One is chance: That is the error that occurs just because of bad luck. This may result in untypical
choices. Unusual units in a population do exist and there is always a possibility that an
abnormally large number of them will be chosen.
The second cause of sampling error is sampling bias.
Sampling bias is a tendency to favour the selection of units that have particular characteristics.
Sampling bias is usually the result of a poor sampling plan. The most notable example is
non-response bias, which occurs when, for some reason, some units have no chance of appearing
in the sample.

Non-sampling error (measurement error)

The other main cause of unrepresentative samples is non-sampling error. This type of error can
occur whether a census or a sample is being used. Like sampling error, non-sampling error may
either be produced by participants in the statistical study or be an innocent by-product of the
sampling plans and procedures.
A non-sampling error is an error that results solely from the manner in which the observations
are made.

Selecting the sample

The preceding section has covered the most common problems associated with statistical studies.
The desirability of a sampling procedure depends on both its vulnerability to error and its cost.
However, economy and reliability are competing ends, because reducing error often requires an
increased expenditure of resources. Of the two types of statistical errors, only sampling error can
be controlled by exercising care in determining the method for choosing the sample. The
previous section has shown that sampling error may be due to either bias or chance. The chance
component (sometimes called random error) exists no matter how carefully the selection
procedures are implemented, and the only way to minimize chance sampling errors is to select a
sufficiently large sample (sample size is discussed towards the end of this tutorial). Sampling
bias on the other hand may be minimized by the wise choice of a sampling procedure.
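A small simulation illustrates the chance component described above: even with an unbiased selection procedure, small samples stray further from the population mean, and enlarging the sample is the only remedy. The population here is invented for illustration.

```python
import random
import statistics

random.seed(42)  # reproducible illustration

# An invented population: 10,000 values centred near 50.
population = [random.gauss(50, 10) for _ in range(10_000)]
true_mean = statistics.mean(population)

def avg_abs_error(sample_size, trials=500):
    """Average absolute gap between sample mean and population mean."""
    errors = []
    for _ in range(trials):
        sample = random.sample(population, sample_size)  # unbiased selection
        errors.append(abs(statistics.mean(sample) - true_mean))
    return statistics.mean(errors)

small, large = avg_abs_error(10), avg_abs_error(400)
print(f"n=10:  {small:.2f}")
print(f"n=400: {large:.2f}")
assert small > large  # random error shrinks as the sample grows
```

No amount of care in selection removes this random error; it only shrinks as the sample size increases.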

Types of samples

There are three primary kinds of samples: the convenience sample, the judgement sample, and
the random sample. They differ in the manner in which the elementary units are chosen.
The convenience sample
A convenience sample results when the more convenient elementary units are chosen from a
population for observation.
The judgement sample
A judgement sample is obtained according to the discretion of someone who is familiar with the
relevant characteristics of the population.
The random sample
This may be the most important type of sample. A random sample allows a known probability
that each elementary unit will be chosen. For this reason, it is sometimes referred to as a
probability sample. This is the type of sampling that is used in lotteries and raffles.

Types of random samples


 A simple random sample
A simple random sample is obtained by choosing elementary units in such a way that each unit
in the population has an equal chance of being selected. A simple random sample is free from
sampling bias. However, using a random number table to choose the elementary units can be
cumbersome. If the sample is to be collected by a person untrained in statistics, then instructions
may be misinterpreted and selections may be made improperly. Instead of using a list of
random numbers, data collection can be simplified by selecting, say, every 10th or 100th unit after
the first unit has been chosen randomly, as discussed below. Such a procedure is called systematic
random sampling.
 A systematic random sample
A systematic random sample is obtained by selecting one unit on a random basis and choosing
additional elementary units at evenly spaced intervals until the desired number of units is
obtained.
 A stratified sample
A stratified sample is obtained by independently selecting a separate simple random sample from
each population stratum. A population can be divided into different groups based on some
characteristic or variable, like income or education. For example, anybody with up to ten years
of education will be in group A, between 10 and 20 years in group B, and between 20 and 30
years in group C. These groups are referred to as strata. You can then randomly select from each
stratum a given number of units, which may be based on proportion: if group A has 100 persons,
group B has 50, and group C has 30, you may decide to take 10% of each. So you end up with 10
from group A, 5 from group B and 3 from group C.
 A cluster sample
A cluster sample is obtained by selecting clusters from the population on the basis of simple
random sampling. The sample comprises a census of each random cluster selected.
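As a rough sketch, the four types of random sample can be illustrated with Python's standard library. The population is hypothetical; the strata mirror the worked example in the text (groups of 100, 50 and 30, sampled at 10%).

```python
import random

random.seed(1)  # reproducible illustration

population = list(range(1, 181))  # 180 hypothetical units, numbered 1..180

# Simple random sample: every unit has an equal chance of selection.
simple = random.sample(population, 18)

# Systematic random sample: random start, then every k-th unit.
k = 10                             # sampling interval
start = random.randrange(k)        # random start within the first interval
systematic = population[start::k]  # 18 evenly spaced units

# Stratified sample: an independent simple random sample from each stratum.
strata = {
    "A": population[:100],     # e.g. up to 10 years of education
    "B": population[100:150],  # 10-20 years
    "C": population[150:180],  # 20-30 years
}
stratified = {name: random.sample(units, len(units) // 10)  # 10% of each
              for name, units in strata.items()}
# Gives 10 from A, 5 from B and 3 from C, as in the text's example.

# Cluster sample: randomly pick whole clusters, then take a census of each.
clusters = [population[i:i + 20] for i in range(0, 180, 20)]  # 9 clusters of 20
chosen = random.sample(clusters, 2)
cluster_sample = [unit for c in chosen for unit in c]

print(len(simple), len(systematic),
      [len(s) for s in stratified.values()], len(cluster_sample))
```

In practice the sampling frame would be a list of real units (households, firms, patients) rather than numbers, but the selection logic is the same.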

Purposeful sampling

Purposeful sampling selects information rich cases for in-depth study. Size and specific cases
depend on the study purpose.
There are about 16 different types of purposeful sampling. They are briefly described below for
you to be aware of them.
Extreme and deviant case sampling This involves learning from highly unusual manifestations
of the phenomenon of interest, such as outstanding successes, notable failures, top of the class,
dropouts, exotic events, crises.
Intensity sampling These are information rich cases that manifest the phenomenon intensely, but
not extremely, such as good students, poor students, above average/below average.
Maximum variation sampling This involves purposefully picking a wide range of variation on
dimensions of interest. This documents unique or diverse variations that have emerged in
adapting to different conditions. It also identifies important common patterns that cut across
variations.
Homogeneous sampling This one reduces variation, simplifies analysis, facilitates group
interviewing. Like instead of having the maximum number of nationalities as in the above case
of maximum variation, it may focus on one nationality say Americans only.
Typical case sampling It involves taking a sample of what one would call typical, normal or
average for a particular phenomenon.
Stratified purposeful sampling This illustrates characteristics of particular subgroups of interest
and facilitates comparisons between the different groups.
Critical case sampling This permits logical generalization and maximum application of
information to other cases: "If it is true for this one case, it is likely to be true of all other
cases." You must have heard statements like "if it happened to so and so then it can happen to
anybody", or "if so and so passed that exam, then anybody can pass".
Snowball or chain sampling This identifies cases of interest from people who know people who
know which cases are information rich, that is, good examples for study or good interview
subjects. It is commonly used in studies of hard-to-reach groups such as homeless households:
you get hold of one member, who tells you where others are or can be found; when you find
those others they tell you where you can get more, and the chain continues.
Criterion sampling Here, you set a criterion and pick all cases that meet it, for example,
all ladies six feet tall, all white cars, all farmers that have planted onions. This method of
sampling is very strong in quality assurance.
Theory-based or operational construct sampling This involves finding manifestations of a
theoretical construct of interest so as to elaborate and examine the construct.
Confirming and disconfirming cases This involves elaborating and deepening the initial
analysis: if you have already started a study, you seek further information, confirm emerging
issues which are not yet clear, seek exceptions and test variation.

Opportunistic sampling This involves following new leads during fieldwork and taking
advantage of the unexpected; it requires flexibility.
Random purposeful sampling This adds credibility when the purposeful sample is larger than
one can handle. It reduces judgement within a purposeful category, but it is not for
generalization or representativeness.
Sampling politically important cases This type of sampling attracts desired attention, or avoids
attracting undesired attention, by purposefully including or eliminating politically sensitive cases
from the sample. These may be individuals or localities.
Convenience sampling It is useful in getting general ideas about the phenomenon of interest.
Combination or mixed purposeful sampling This combines various sampling strategies to
achieve the desired sample. It helps in triangulation, allows for flexibility, and meets multiple
interests and needs.

When selecting a sampling strategy it is necessary that it fits the purpose of the study, the
resources available, the question being asked and the constraints being faced. This holds true for
the sampling strategy as well as the sample size.

Sample size

Before deciding how large a sample should be, you have to define your study population. The
question of how large a sample should be is a difficult one. Sample size can be determined by
various constraints, which influence the sample size as well as the sample design and data
collection procedures.

In general, sample size depends on the nature of the analysis to be performed, the desired
precision of the estimates one wishes to achieve, the kind and number of comparisons that will
be made, the number of variables that have to be examined simultaneously and how
heterogeneous a universe is sampled.

In non-experimental research, most often, relevant variables have to be controlled statistically
because groups differ by factors other than chance.

More technical considerations suggest that the required sample size is a function of the precision
of the estimates one wishes to achieve, the variability or variance, one expects to find in the
population and the statistical level of confidence one wishes to use.
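These technical considerations are often condensed into a standard formula for estimating a population proportion with a simple random sample. The sketch below assumes a large population and no finite-population correction; z is the confidence coefficient, p the expected variability, and e the desired precision.

```python
import math

def sample_size_for_proportion(z, p, e):
    """Required n to estimate a population proportion.

    z: z-score for the confidence level (1.96 for 95% confidence)
    p: expected proportion (0.5 is the most conservative guess)
    e: desired margin of error (precision), e.g. 0.05 for +/- 5%
    """
    return math.ceil(z**2 * p * (1 - p) / e**2)

# 95% confidence, maximum variability, +/- 5 percentage points:
n = sample_size_for_proportion(1.96, 0.5, 0.05)
print(n)  # 385
```

Tightening the precision to +/- 3% raises the requirement to over a thousand respondents, which illustrates why precision is usually the dominant driver of cost.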

Reporting and
Presentation of
Research Findings

Chapter – 08

Reporting Survey Results

When your survey and analysis have been completed, the final step in the survey process is to
present your findings, which involves the creation of a research report. This report should
include a background of why you conducted the survey, a breakdown of the results, and
conclusions and recommendations supported by this material. This is one of the most important
aspects of your survey research as it is the key in communicating your findings to those who can
make decisions to take action on those results.

Provide a background

Before you start working on the details of your report, you need to explain the general
background of your survey research. If you will be presenting the findings to your audience (the
decision-makers), you will need to make the basis for your research clear, including what
objectives were established, and the conclusions drawn from your findings.

Introduction to the survey research

List the factors that motivated you to conduct this research in the first place. By stating the
reasons behind the research, your audience will have a better understanding of why the survey
was conducted and the importance of the findings.

Identify research objectives

Itemize the goals and objectives you set out to achieve. Before you constructed your survey, you
had a plan as to the information you needed to get from your respondents. Once you had those
goals in mind, your survey questions were chosen. Did your respondents’ answers give you the
information you sought when you designed the survey? Make a list of the objectives you set
out when you started, those objectives that were met and those that were not, and any other
information relating to the planning process.

Explain the data collection process

Specify how your data was captured. For the purposes of this article, we are referring to a survey
for collecting the data. But be specific as to what type of survey you used – online, telephone, or
paper-based. Also consider who it was sent to, how many were sent, and how the analysis was
conducted.

Describe your findings

Explain findings discovered in your research, especially facts that were important, unusual, or
surprising. Briefly highlight some of the key points that were uncovered in your results. More
detail will be revealed later in the presentation.

Finalize your thoughts and make recommendations

Summarize findings in concise statements so that an action plan can be created. Your
conclusions and recommendations should be based on the data that you have gathered. It is from
these final statements that management will make their decisions on how to take action on a
given situation.

Structure your report

The background information of your survey research may need to be fine-tuned into a structured
report format for a polished presentation. Survey research reports typically have the following
components: title page, table of contents, executive summary, methodology, findings, survey
conclusions, and recommendations.

Title Page

State the focus of your research. The title should state what the report is about, for example,
"Customer Satisfaction in the Indian Market." Also include the names of who prepared the
report, to whom it will be presented, and the date the report is to be presented.

Table of Contents

List the sections in your report. Here is where you give a high-level overview of the topics to be
discussed, in the order they are presented in the report. Depending on the length of your report,
you should consider including a listing of all charts and graphs so that your audience can quickly
locate them.

Executive Summary

Summarize the major findings up front. Listed at the beginning of your report, this short list of
survey findings, conclusions, and recommendations is helpful. The key word here is "short", so
use no more than a few complete sentences, which may be bulleted if you wish. This summary
can also be used as a reference when your reader has finished the report and wants to glance
over the major points.

Methodology

Describe how you got your data. Whether you conducted an online, paper or telephone survey, or
perhaps you talked to people face to face, make sure you list how your research was conducted.
Also make note of how many people participated, response rates, and the time it took to conduct
this research.

Findings

Present your research results in detail. You want to be detailed with this section of the report.
Display your results in the form of tables, charts and graphs, and incorporate descriptive text to
explain what these visuals mean, and to emphasize important points.

Survey Conclusions

Summarize the key points. This concise collection of findings is similar to the Executive
Summary. These conclusions should be strong statements that establish a relationship between
the data and the visuals. Remember that thoughts expressed here must be supported by data. You
may also mention anything that may be related to this survey research, such as previous studies
or survey results that may prove useful if included.

Recommendations

Suggest a course of action. Based on your conclusions, make suggestions at a high level as to
what actions could be taken to help the survey project meet the research objectives. For example,
if you concluded that customers are not satisfied with customer service from the support staff,
you may recommend that management should monitor support staff calls to assure quality
customer service standards are met.

Making a recommendation doesn't necessarily mean that action is going to take place, but it
provides management with a baseline from which to make their decisions.

Presentation Media

In years past, formal reporting of research results was presented as a printed report, often very
long and difficult to comprehend. In recent years, the convenience of word processing,
spreadsheet and presentation software has streamlined and condensed this process dramatically.
Presentation software, like Microsoft PowerPoint, has become a standard in most industries.
Used in a slide format, it can display your survey results in an organized manner.

To give an idea of how efficient this presentation media is, here's an example. A traditional text
heavy report may contain 100 pages of text and graphics, and wading through this material can
be time-consuming and hard to understand. However, when organized in a slide format, this
massive amount of information can be reduced down to a 20-50 page slide presentation, with
concise bullet points and compelling visuals. Follow the tips below to streamline the look and
feel of your presentation.

Tips on formatting your slides:

Experiment with type styles, sizes, and colors. Don't be afraid to bold text, underline or
italicize if you are trying to emphasize a point.
Keep titles short. About 5-7 words will get your point across.
Make good use of the space available in the slide. Enlarge the graphs and have the text large
enough that it is easy to read across a room.
Format your slides horizontally (landscape) and not vertically (portrait). You don't want
part of your slide to be below eye level.
Don't put too much data on one slide. One idea per slide is ideal. If you have many graphs and
data in one place, the audience may lose interest. In addition, the increased amount of text will
most likely require you to reduce your font size, which will make it harder to read from a distance.
Two graphs maximum per slide. This will make your data easier to understand. If you must
have two visuals, make sure the text accompanying it is simple.
Avoid using busy slide backgrounds. Multiple colors or gradients can make text hard to read.

Conclusion

Communicate your survey results effectively to your audience with a survey research report.
Organize your survey findings with background information, detailed data and results,
conclusions and recommendations.

Presentation of Research Result

The presentation and reporting of results is one of the most important things we do as specialists
and professionals. The opportunities to present ourselves and our ideas are often constrained by
time, journal space or the attention of the audience. A successful delivery of your message
depends on optimal use of a very small window of opportunity and yet this small aperture may
be the only visible outlet for hundreds of hours of personal effort and years of painstaking
research. Crucially, the likelihood of further windows in the future relies on how well you use
your opportunity.

Starting your presentation

First of all work out what you want to say: what single message do I want my audience to take
away? Start with pen and paper and summarize your take-away message in one simple sentence.
The rest of your presentation or article should then be constructed to lead to this point whilst
providing the necessary background the audience needs to understand the take-away message
and its constraints.

Having determined your take-away message, write down bullet points leading to it. The order
should not matter at this stage as it should be sufficient simply to get headings or concise ideas
down on paper. After the main points are there, return through them, select those that are
essential and then place them in their best order, the order that will lead your audience's attention
in a logical fashion.

Presentation method

Oral presentations can be made using a variety of presentation aids. You may choose to present
without aids at all, as many lawyers do, but you risk diluting your message considerably and
losing your audience in the process. It is well-recognized that an audience has maximum
information absorption and retention when that information is both visible and audible. A good
presenter uses projected graphics as an aide-mémoire, expanding on points as they appear to the
audience. Use the bullet points on your slides to do this whilst adding further information and
explaining charts and diagrams. In this way you can prompt yourself without the need for
reading notes. Bear this in mind when working out the bullet points for your presentation
beforehand. If you put everything down on the slide and then read it out you will lose vital eye
contact and rapport with your audience and they will wonder why you did not simply give them
a handout.

Visual aids

Visual aids come in many different forms: black or white boards, handouts, paper flip charts,
overhead transparencies, colour slides, video or computer presentations. The last method is the
most recent and in my opinion the best and most versatile. Having prepared your presentation
using a computer software package, the computer outputs to a video-type projector which
effectively displays the computer screen as a slide. For text, charts and graphs the end result is
indistinguishable from colour slides which have been the gold standard for quality in the past.
Not all conferences or lecture halls have this facility yet and you should check beforehand if
intending to use it.

Visual data

When presenting visual data it is worth respecting a number of rules which will prevent
audience fatigue:
 Letter dimensions: no lettering less than six per cent of the longest dimension of the slide
and no letter height more than six times its thickness
 Data content: no more than six times two items of information
 Number of slides/OHPs: not more than six in each quarter of an hour
 Colour: six per cent of your audience is colour blind; use colour carefully
 Visibility: all visual materials legible at six times their diagonal dimensions (slides should
be legible at 25 cm and overheads at 200 cm)
 Text size: Six points is the optimum height for slides

If you use computer software to prepare your presentation, the package will ensure that you
adhere to most of these rules by default. If you try to contravene them the package usually lets
you know or you start to make life very difficult for yourself.
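The quantified rules above can also be checked by hand or by a short script. The sketch below is purely illustrative: the function name, the millimetre dimensions and the sample values are assumptions, not part of any presentation package; only the two thresholds (lettering at least six per cent of the longest dimension, letter height no more than six times its thickness) come from the text.

```python
# Illustrative check of two of the "rule of six" lettering guidelines.
# Dimensions are in millimetres; thresholds follow the rules quoted above.

def check_slide_lettering(slide_w, slide_h, letter_height, stroke_thickness):
    """Return a list of rule violations (an empty list means the slide passes)."""
    problems = []
    longest = max(slide_w, slide_h)
    if letter_height < 0.06 * longest:
        problems.append("lettering smaller than 6% of the longest dimension")
    if letter_height > 6 * stroke_thickness:
        problems.append("letter height more than six times its thickness")
    return problems

# A 36 mm x 24 mm slide with 2.5 mm letters of 0.5 mm stroke passes both rules.
print(check_slide_lettering(36, 24, 2.5, 0.5))  # prints []
```

As the text notes, presentation software applies equivalent constraints by default, so a manual check like this is only needed when you deliberately override the package's settings.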

Charts and diagrams


Choose the most appropriate chart for the information you have to present. If you are not sure
and you are using a computer presentation package it is often possible to flick between different
types of chart with the same data and then you can actually see which one looks the most
effective. Beware of non-data ink; three-dimensional effects are often spectacular but do they
add to or complicate the end result? They may make it difficult for your audience to extract the
message. A picture may say a thousand words but if it does, ensure there is sufficient time for
your audience to interpret it.

Make sure your axes are correctly and reasonably labelled. They should be clear and not
misleading although they should be 'manipulated' to show a result or effect to its best. Once
again, computer packages allow you to experiment with axes to obtain the look you wish to
achieve. You should keep graphs and charts simple: the best graphs contain only those points or
values that are necessary to show the effect or difference; don't distract from the point by
providing unnecessary visual information. An example is splitting a pie into ten segments when
the six smallest can be grouped and labelled as 'Other' and then these explained if necessary.

Computer presentations

The advantages of computer presentations are many: low cost per presentation, very high quality,
versatility and flexibility. Once done, the presentation can be modified, or updated very easily. It
is also incredibly easy to learn and use: someone with little computer experience could create
a presentation with less than a day's training. Packages such as Microsoft PowerPoint incorporate
features that make creating professional presentations very easy and quick indeed.

If you can use the computer to make the presentation, these software packages allow bullet
points to appear one at a time (far better than the common technique of using a piece of paper on
top of an overhead to reveal it bit by bit), slides can fade off or on the screen with a large variety
of effects, the presenter can write comments or circle points on the screen and can flick
backwards and forwards through the presentation at the click of a mouse button. It is also
possible to incorporate scanned photographs or moving video images within the presentation
although this may require you to seek help from your computer department. The latest packages
even allow applause after each slide (if your computer has a speaker), although this might
suggest a greater degree of success than you have actually achieved!

Specific Application

Chapter – 09

Baseline Studies

Baseline research crops up in conversations more often in its absence than in its presence.
Human nature being what it is, we are more often well into a programme before someone thinks
about evaluating it, and there is wringing of hands over the lack of a baseline study to compare
with, to see whether there have been changes. The evaluation of development programmes would
be much more effective and convincing if baseline studies were more often undertaken.

Baseline research should take place before a programme starts. It is aimed at understanding the
nature and extent of a problem so that planning can be informed by this information. It would
also look at any existing legislation, policies and interventions which are relevant to the problem.
At this point it may overlap with a stakeholder analysis. A baseline study also aims to provide a
reference point against which changes can be measured, so that the impact of the programme, or
of external factors, can be assessed.

It should be noted that baseline research should not be a sort of census process! Spending
obviously needs to be in proportion to the benefit to be gained from the study. But at a minimum,
the collection of existing research data, a systematic look at what is at present being done to
address the problem, and some small-scale fieldwork to get basic information about how the issue
is experienced in the community should be undertaken. For large and important programmes,
which are seen as demonstration projects whose learning should inform practice very widely, a
more rigorous study will be needed. Consider which key factors the planned intervention should
change – what can be used as observable, quantifiable indicators for these? Logical framework
analysis refers to these as ‘verifiable indicators’.

There can be a ‘chicken and egg’ problem – how can one justify spending resources on a
baseline study before there is evidence to show that it is needed? This is probably why these
studies are relatively rare, but funders are becoming increasingly interested in evidence of
effectiveness. A good balance between cost and benefit/learning is needed.

Critical Questions in baseline assessment

 What kind of issue or problem is giving us cause for concern?
 How serious or widespread is this problem?
 What are its exact dimensions?
 What are the existing policies and programmes that affect these community members?

A range of research techniques may be used in baseline research, but if the aim is to create a
basis for a later evaluation process, it will be important to include indicators which can be used
in a quantitative way to assess the success of the programme.

Need assessment

A systematic set of procedures undertaken for the purpose of setting priorities and making
decisions about program or organizational improvement and allocation of resources. The
priorities are based upon identified needs.

A tool for determining valid and useful problems which are philosophically as well as
practically sound

Why Need Assessment?

• To obtain valid and reliable information
• To build a case for funding
• To set criteria for determining how best to allocate resources
• To get buy-in from the stakeholders
• To meet regulatory or legal mandates
• To support resource allocation and decision-making
• To assess the needs of a specific underserved subpopulation
• As a part of program evaluation

Phases of Need Assessment

• Preassessment - Exploration
• Assessment -Data Gathering
• Postassessment- Utilization

What is Action Research?

Definition

Action research is known by many other names, including participatory research, collaborative
inquiry, emancipatory research, action learning, and contextural action research, but all are
variations on a theme. Put simply, action research is “learning by doing” - a group of people
identify a problem, do something to resolve it, see how successful their efforts were, and if not
satisfied, try again. While this is the essence of the approach, there are other key attributes of
action research that differentiate it from common problem-solving activities that we all engage in
every day. A more succinct definition is,

"Action research...aims to contribute both to the practical concerns of people in an immediate
problematic situation and to further the goals of social science simultaneously. Thus, there is a
dual commitment in action research to study a system and concurrently to collaborate with
members of the system in changing it in what is together regarded as a desirable direction.
Accomplishing this twin goal requires the active collaboration of researcher and client, and thus
it stresses the importance of co-learning as a primary aspect of the research process."

What separates this type of research from general professional practices, consulting, or daily
problem-solving is the emphasis on scientific study, which is to say the researcher studies the
problem systematically and ensures the intervention is informed by theoretical considerations.
Much of the researcher’s time is spent on refining the methodological tools to suit the exigencies
of the situation, and on collecting, analyzing, and presenting data on an ongoing, cyclical basis.

Several attributes separate action research from other types of research. Primary is its focus on
turning the people involved into researchers, too - people learn best, and more willingly apply
what they have learned, when they do it themselves. It also has a social dimension - the research
takes place in real-world situations, and aims to solve real problems. Finally, the initiating
researcher, unlike in other disciplines, makes no attempt to remain objective, but openly
acknowledges their bias to the other participants.

The Action Research Process

Stephen Kemmis has developed a simple model of the cyclical nature of the typical action
research process (Figure 1). Each cycle has four steps: plan, act, observe, reflect.

Simple Action Research Model (from MacIsaac, 1995)

Gerald Susman (1983) gives a somewhat more elaborate listing. He distinguishes five phases to
be conducted within each research cycle (Figure 2). Initially, a problem is identified and data is
collected for a more detailed diagnosis. This is followed by a collective postulation of several
possible solutions, from which a single plan of action emerges and is implemented. Data on the
results of the intervention are collected and analyzed, and the findings are interpreted in light of
how successful the action has been. At this point, the problem is re-assessed and the process
begins another cycle. This process continues until the problem is resolved.
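The cyclical, repeat-until-resolved shape of Susman's five phases can be sketched as a simple loop. The sketch below is only an illustration of the control flow: the function names, the toy "severity" problem and the placeholder phase functions are assumptions for demonstration, not part of Susman's model.

```python
# A minimal sketch of Susman's five-phase action research cycle as a loop.
# The phase functions are supplied by the caller; only the control flow
# (diagnose -> plan -> act -> evaluate -> specify learning, repeated until
# the problem is resolved) mirrors the model described above.

def run_action_research(diagnose, plan, act, evaluate, specify_learning,
                        max_cycles=10):
    """Repeat the five phases until the problem is judged resolved."""
    findings = []
    for cycle in range(1, max_cycles + 1):
        problem = diagnose()                        # identify/define a problem
        options = plan(problem)                     # consider alternative actions
        action = act(options)                       # select and take one action
        resolved, results = evaluate(action)        # study the consequences
        findings.append(specify_learning(results))  # identify general findings
        if resolved:
            break
    return cycle, findings

# Demo: a toy "problem" whose severity drops by one each cycle.
severity = {"level": 3}

def toy_evaluate(action):
    severity["level"] -= 1          # the intervention reduces severity
    return severity["level"] == 0, severity["level"]

cycles, findings = run_action_research(
    diagnose=lambda: f"severity {severity['level']}",
    plan=lambda problem: ["option A", "option B"],
    act=lambda options: options[0],
    evaluate=toy_evaluate,
    specify_learning=lambda results: f"residual severity {results}",
)
print(cycles, findings[-1])  # prints: 3 residual severity 0
```

In a real project each phase would of course be weeks of collaborative fieldwork rather than a function call; the loop simply makes the re-assessment step at the end of each cycle explicit.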

Detailed Action Research Model (adapted from Susman 1983) – a cycle of five phases:
 DIAGNOSING: identifying or defining a problem
 ACTION PLANNING: considering alternative courses of action
 TAKING ACTION: selecting a course of action
 EVALUATING: studying the consequences of an action
 SPECIFYING LEARNING: identifying general findings

Principles of Action Research

What gives action research its unique flavour is the set of principles that guide the research.
Winter (1989) provides a comprehensive overview of six key principles.

Reflexive critique

An account of a situation, such as notes, transcripts or official documents, will make implicit
claims to be authoritative, i.e., it implies that it is factual and true. Truth in a social setting,
however, is relative to the teller. The principle of reflexive critique ensures people reflect on
issues and processes and make explicit the interpretations, biases, assumptions and concerns
upon which judgments are made. In this way, practical accounts can give rise to theoretical
considerations.

Dialectical critique

Reality, particularly social reality, is consensually validated, which is to say it is shared through
language. Phenomena are conceptualized in dialogue, therefore a dialectical critique is required
to understand the set of relationships both between the phenomenon and its context, and between
the elements constituting the phenomenon. The key elements to focus attention on are those
constituent elements that are unstable, or in opposition to one another. These are the ones that
are most likely to create changes.

Collaborative Resource

Participants in an action research project are co-researchers. The principle of collaborative
resource presupposes that each person’s ideas are equally significant as potential resources for
creating interpretive categories of analysis, negotiated among the participants. It strives to avoid
the skewing of credibility stemming from the prior status of an idea-holder. It especially makes
possible the insights gleaned from noting the contradictions both between many viewpoints and
within a single viewpoint.

Risk

The change process potentially threatens all previously established ways of doing things, thus
creating psychic fears among the practitioners. One of the more prominent fears comes from the
risk to ego stemming from open discussion of one’s interpretations, ideas, and judgments.
Initiators of action research will use this principle to allay others’ fears and invite participation
by pointing out that they, too, will be subject to the same process, and that whatever the
outcome, learning will take place.

Plural Structure

The nature of the research embodies a multiplicity of views, commentaries and critiques, leading
to multiple possible actions and interpretations. This plural structure of inquiry requires a plural
text for reporting. This means that there will be many accounts made explicit, with
commentaries on their contradictions, and a range of options for action presented. A report,
therefore, acts as a support for ongoing discussion among collaborators, rather than a final
conclusion of fact.

Theory, Practice, Transformation

For action researchers, theory informs practice, practice refines theory, in a continuous
transformation. In any setting, people’s actions are based on implicitly held assumptions,
theories and hypotheses, and with every observed result, theoretical knowledge is enhanced. The
two are intertwined aspects of a single change process. It is up to the researchers to make
explicit the theoretical justifications for the actions, and to question the bases of those
justifications. The ensuing practical applications that follow are subjected to further analysis, in
a transformative cycle that continuously alternates emphasis between theory and practice.

When is Action Research used?

Action research is used in real situations, rather than in contrived, experimental studies, since its
primary focus is on solving real problems. It can, however, be used by social scientists for
preliminary or pilot research, especially when the situation is too ambiguous to frame a precise
research question. Mostly, though, in accordance with its principles, it is chosen when
circumstances require flexibility, the involvement of the people in the research, or when change
must take place quickly or holistically.

It is often the case that those who apply this approach are practitioners who wish to improve
understanding of their practice, social change activists trying to mount an action campaign, or,
more likely, academics who have been invited into an organization (or other domain) by
decision-makers aware of a problem requiring action research, but lacking the requisite
methodological knowledge to deal with it.

Situating Action Research in a Research Paradigm

Positivist Paradigm

The main research paradigm for the past several centuries has been that of Logical Positivism.
This paradigm is based on a number of principles, including: a belief in an objective reality,
knowledge of which is only gained from sense data that can be directly experienced and verified
between independent observers. Phenomena are subject to natural laws that humans discover in
a logical manner through empirical testing, using inductive and deductive hypotheses derived
from a body of scientific theory. Its methods rely heavily on quantitative measures, with
relationships among variables commonly shown by mathematical means. Positivism, used in
scientific and applied research, has been considered by many to be the antithesis of the principles
of action research (Susman and Evered 1978, Winter 1989).

Interpretive Paradigm

Over the last half century, a new research paradigm has emerged in the social sciences to break
out of the constraints imposed by positivism. With its emphasis on the relationship between
socially-engendered concept formation and language, it can be referred to as the Interpretive
paradigm. Containing such qualitative methodological approaches as phenomenology,
ethnography, and hermeneutics, it is characterized by a belief in a socially constructed,
subjectively-based reality, one that is influenced by culture and history. Nonetheless it still
retains the ideals of researcher objectivity, and researcher as passive collector and expert
interpreter of data.

Paradigm of Praxis

Though sharing a number of perspectives with the interpretive paradigm, and making
considerable use of its related qualitative methodologies, there are some researchers who feel
that neither it nor the positivist paradigms are sufficient epistemological structures under which
to place action research (Lather 1986, Morley 1991). Rather, a paradigm of Praxis is seen as
where the main affinities lie. Praxis, a term used by Aristotle, is the art of acting upon the
conditions one faces in order to change them. It deals with the disciplines and activities
predominant in the ethical and political lives of people. Aristotle contrasted this with Theoria -
those sciences and activities that are concerned with knowing for its own sake. Both are equally
needed he thought. That knowledge is derived from practice, and practice informed by
knowledge, in an ongoing process, is a cornerstone of action research. Action researchers also
reject the notion of researcher neutrality, understanding that the most active researcher is often
one who has most at stake in resolving a problematic situation.

Current Types of Action Research

By the mid-1970s, the field had evolved, revealing four main ‘streams’ that had emerged:
traditional, contextural (action learning), radical, and educational action research.
Traditional Action Research

Traditional Action Research stemmed from Lewin’s work within organizations and encompasses
the concepts and practices of Field Theory, Group Dynamics, T-Groups, and the Clinical Model.
The growing importance of labour-management relations led to the application of action research
in the areas of Organization Development, Quality of Working Life (QWL), Socio-technical
systems (e.g., Information Systems), and Organizational Democracy. This traditional approach
tends toward the conservative, generally maintaining the status quo with regards to
organizational power structures.

Contextural Action Research (Action Learning)

Contextural Action Research, also sometimes referred to as Action Learning, is an approach
derived from Trist’s work on relations between organizations. It is contextural, insofar as it
entails reconstituting the structural relations among actors in a social environment; domain-
based, in that it tries to involve all affected parties and stakeholders; holographic, as each
participant understands the working of the whole; and it stresses that participants act as project
designers and co-researchers. The concept of organizational ecology, and the use of search
conferences come out of contextural action research, which is more of a liberal philosophy, with
social transformation occurring by consensus and normative incrementalism.

Radical Action Research

The Radical stream, which has its roots in Marxian ‘dialectical materialism’ and the praxis
orientations of Antonio Gramsci, has a strong focus on emancipation and the overcoming of
power imbalances. Participatory Action Research, often found in liberationist movements and
international development circles, and Feminist Action Research both strive for social
transformation via an advocacy process to strengthen peripheral groups in society.

Educational Action Research

A fourth stream, that of Educational Action Research, has its foundations in the writings of John
Dewey, the great American educational philosopher of the 1920s and 30s, who believed that
professional educators should become involved in community problem-solving. Its practitioners,
not surprisingly, operate mainly out of educational institutions, and focus on development of
curriculum, professional development, and applying learning in a social context. It is often the
case that university-based action researchers work with primary and secondary school teachers
and students on community projects.

Action Research Tools

Action Research is more a holistic approach to problem-solving than a single method for
collecting and analyzing data. Thus, it allows for several different research tools to be used
as the project is conducted. These various methods, which are generally common to the
qualitative research paradigm, include: keeping a research journal, document collection and
analysis, participant observation recordings, questionnaire surveys, structured and unstructured
interviews, and case studies.
Evaluation research

Evaluation research can be defined as a type of study that uses standard social research methods
for evaluative purposes, as a specific research methodology, and as an assessment process that
employs special techniques unique to the evaluation of social programs. After the reasons for
conducting evaluation research are discussed, the general principles and types are reviewed.
Several evaluation methods are then presented, including input measurement, output/
performance measurement, impact/outcomes assessment, service quality assessment, process
evaluation, benchmarking, standards, quantitative methods, qualitative methods, cost analysis,
organizational effectiveness, program evaluation methods, and LIS-centered methods. Other
aspects of evaluation research considered are the steps of planning and conducting an evaluation
study and the measurement process, including the gathering of statistics and the use of data
collection techniques. The process of data analysis and the evaluation report are also given
attention. It is concluded that evaluation research should be a rigorous, systematic process that
involves collecting data about organizations, processes, programs, services, and/or resources.
Evaluation research should enhance knowledge and decision making and lead to practical
applications.

What is evaluation research?

Evaluation research is not easily defined. There is not even unanimity regarding its name; it is
referred to as evaluation research and evaluative research. Some individuals consider evaluation
research to be a specific research method; others focus on special techniques unique, more often
than not, to program evaluation; and yet others view it as a research activity that employs
standard research methods for evaluative purposes. Consistent with the last perspective, Childers
concludes, "The differences between evaluative research and other research center on the
orientation of the research and not on the methods employed". When evaluation research is
treated as a research method, it is likely to be seen as a type of applied or action research, not as
basic or theoretical research.

Types of evaluation research

Before selecting specific methods and data collection techniques to be used in an evaluation
study, the evaluator, according to Wallace and Van Fleet (2001), should decide on the general
approach to be taken. They categorize the general approaches as ad hoc/as needed/as required or
evaluation conducted when a problem arises; externally centered, or evaluation necessitated by
the need to respond to external forces such as state library and accrediting agencies; internally
centered, or evaluation undertaken to resolve internal problems; and research centered, or
evaluation that is conducted so that the results can be generalized to similar environments. Other
broad categories of evaluation that can encompass a variety of methods include macroevaluation,
microevaluation, subjective evaluation, objective evaluation, formative evaluation (evaluation of
a program made while it is still in progress), and summative evaluation (performed at the end of
a program). The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different
evaluation approaches and models ranging from "appreciative inquiry" to "connoisseurship" to
"transformative evaluation."

Evaluation methods
Having decided on the general approach to be taken, the evaluator must next select a more
specific approach or method to be used in the evaluation study. What follows are brief overviews
of several commonly used evaluation methods or groups of methods.

Input Measurement

Input measures are measures of the resources that are allocated to or held by an organization and
represent the longest-standing, most traditional approach to assessing the quality of organizations
and their resources and services. Examples of input measures for libraries include the number of
volumes held, money in the budget, and number of staff members. By themselves they are more
measurement than true evaluation and are limited in their ability to assess quality.

Output/Performance Measurement

Output or performance measures serve to indicate what was accomplished as a result of some
programmatic activity and thus warrant being considered as a type of evaluation research. Such
measures focus on indicators of library output and effectiveness rather than merely on input; are
closely related to the impact of the library on its community; and, as is true for virtually all
evaluation methods, should be related to the organization's goals and objectives.

As was just indicated, one critical element of performance measurement is effectiveness; another
is user satisfaction. In addition to user satisfaction, examples of performance/output measures
include use of facilities and equipment, circulation of materials, document delivery time,
reference service use, subject search success, and availability of materials. The Association of
Research Libraries (2004) identified the following eight output measures for academic libraries:
ease and breadth of access, user satisfaction, teaching and learning, impact on research, cost
effectiveness of library operations and services, facilities and space, market penetration, and
organizational capacity. One could argue that not all of those eight measures represent true
performance or output measures, but they are definitely measures of effectiveness.

Impact/Outcomes Assessment

The input or resources of a library are relatively straightforward and easy to measure. True
measurement of the performance of a library is more difficult to achieve, and it is even more
challenging to measure impact/outcomes or how the use of library and information resources and
services actually affects users. Rossi, Lipsey, and Freeman (2004) point out that outcomes must
relate to the benefits of products and services, not simply their receipt (a performance measure).
However, given the increasing call for accountability, it is becoming imperative for libraries to
measure outcomes or impact. Indeed, "outcomes evaluation has become a central focus, if not the
central focus, of accountability-driven evaluation" (Patton, 2002, p. 151).

Some authors use the terms impact and outcome synonymously; others see them as somewhat
different concepts. Patton (2002, p. 162) suggests a logical continuum that includes inputs,
activities and processes, outputs, immediate outcomes, and long-term impacts. Bertot and
McClure, in a 2003 article in Library Trends (pp. 599-600), identified six types of outcomes:

Economic: outcomes that relate to the financial status of library users.
Learning: outcomes reflecting the learning skills and acquisition of knowledge of users.
Research: outcomes that include, for example, the impacts of library services and resources on
the research process of faculty and students.
Information Exchange: outcomes that include the ability of users to exchange information with
organizations and other individuals.
Cultural: the impact of library resources and services on the ability of library users to benefit
from cultural activities.
Community: outcomes that affect a local community and in turn affect the quality of life for
members of the community.

Cost Benefit Analysis

What is benefit-cost analysis?

A benefit-cost analysis is a systematic evaluation of the economic advantages (benefits) and
disadvantages (costs) of a set of investment alternatives. Typically, a “Base Case” is compared to
one or more Alternatives (which have some significant improvement compared to the Base
Case). The analysis evaluates incremental differences between the Base Case and the
Alternative(s). In other words, a benefit-cost analysis tries to answer the question: What
additional benefits will result if this Alternative is undertaken, and what additional costs are
needed to bring it about?

The objective of a benefit-cost analysis is to translate the effects of an investment into monetary
terms and to account for the fact that benefits generally accrue over a long period of time while
capital costs are incurred primarily in the initial years. The primary transportation-related
elements that can be monetized are travel time costs, vehicle operating costs, safety costs,
ongoing maintenance costs, and remaining capital value (a combination of capital expenditure
and salvage value). For some kinds of projects, such as bypasses, travel times and safety may
improve, but operating costs may increase due to longer travel distances. A properly conducted
benefit-cost analysis would indicate whether travel time and safety savings exceed the costs of
design, construction, and the long-term increased operating costs.
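The core arithmetic -- discounting a stream of annual benefits to present value and comparing it against capital plus ongoing costs -- can be sketched as follows. All figures, the discount rate, and the analysis period are hypothetical, chosen only to illustrate the calculation.

```python
# Minimal benefit-cost sketch: discount a constant annual benefit stream
# to present value and compare it against capital plus ongoing costs.
# All figures below are hypothetical.

def present_value(annual_amount, rate, years):
    """Present value of a constant annual amount over `years` at `rate`."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

def benefit_cost_ratio(capital_cost, annual_benefits, annual_extra_costs,
                       rate=0.04, years=20):
    """Ratio above 1 means discounted benefits exceed discounted costs."""
    pv_benefits = present_value(annual_benefits, rate, years)
    pv_costs = capital_cost + present_value(annual_extra_costs, rate, years)
    return pv_benefits / pv_costs

# A hypothetical bypass Alternative vs. the Base Case (figures in $1,000s):
# 600/year in travel time and safety savings, 50/year in extra operating
# costs from longer travel distances, 5,000 in construction.
ratio = benefit_cost_ratio(capital_cost=5_000,
                           annual_benefits=600,
                           annual_extra_costs=50)
print(f"B/C ratio: {ratio:.2f}")
```

A ratio above 1 indicates that, under these assumed figures, the monetized benefits of the Alternative outweigh its costs; decision-makers would still weigh this result against non-monetized effects.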

Benefit-cost analyses have been used as a tool by project managers to help evaluate preliminary
concepts during early planning studies, to evaluate alternatives and select a Preferred Alternative
as part of project environmental documentation, and to evaluate potential design and
construction staging options as part of detailed design and/or construction. A benefit-cost
analysis provides a monetary measure of the relative economic desirability of project alternatives,
but decision-makers often weigh the results against other non-monetized effects and impacts of
the project, such as environmental effects.

How does benefit-cost analysis fit into the project development process?

A benefit-cost analysis is a tool for assisting project managers when they are evaluating and
comparing different alternatives. Alternative comparisons are done at different points in the
project development process, including: concept development, environmental documentation,
design, and construction. Results from a benefit-cost analysis, along with public input and
environmental documentation, can be used to evaluate both the monetized and non-monetized
effects and impacts of alternatives when a decision needs to be made.
The benefit-cost analysis always tries to answer the question, “From an economic
perspective, are the benefits worth the investment?” However, this question is posed in different
ways at different points in the project development process.
Project planning: From an economic perspective, are the benefits of building a road worth the
project costs (compared to the current system)?
Design and environmental study: From an economic perspective, are the benefits of location “A”
worth the project costs? How does location “A” compare to “B” or “C”?
Construction planning: From an economic perspective, are the benefits of closing some or all
lanes during construction worth the traffic delay and diversion costs (compared to keeping some
lanes open)?
In principle, an ideal benefit-cost analysis would project and evaluate all possibilities, but this is
neither possible nor practical, since it would involve large uncertainties.

Qualitative Research

Qualitative research is collecting, analyzing, and interpreting data by observing what people do
and say. Whereas quantitative research refers to counts and measures of things, qualitative
research refers to the meanings, concepts, definitions, characteristics, metaphors, symbols, and
descriptions of things.

Qualitative research is much more subjective than quantitative research and uses very different
methods of collecting information, mainly individual, in-depth interviews and focus groups. The
nature of this type of research is exploratory and open-ended. Small numbers of people are
interviewed in-depth and/or a relatively small number of focus groups are conducted.

Participants are asked to respond to general questions and the interviewer or group moderator
probes and explores their responses to identify and define people’s perceptions, opinions and
feelings about the topic or idea being discussed and to determine the degree of agreement that
exists in the group. The quality of the findings from qualitative research is directly dependent
upon the skills, experience, and sensitivity of the interviewer or group moderator.

This type of research is often less costly than surveys and is extremely effective in acquiring
information about people’s communications needs and their responses to and views about
specific communications.

Basically, quantitative research is objective; qualitative is subjective. Quantitative research seeks
explanatory laws; qualitative research aims at in-depth description. Quantitative research
measures what it assumes to be a static reality in hopes of developing universal laws. Qualitative
research is an exploration of what is assumed to be a dynamic reality. It does not claim that what
is discovered in the process is universal, and thus, replicable. Common differences usually cited
between these types of research include:

Characteristics

Purpose: Understanding - Seeks to understand people’s interpretations.

Reality: Dynamic - Reality changes with changes in people’s perceptions.

Viewpoint: Insider - Reality is what people perceive it to be.

Values: Value bound - Values will have an impact and should be understood and taken into
account when conducting and reporting research.

Focus: Holistic - A total or complete picture is sought.

Orientation: Discovery - Theories and hypotheses are evolved from data as collected.

Data: Subjective - Data are perceptions of the people in the environment.

Instrumentation: Human - The human person is the primary collection instrument.

Conditions: Naturalistic - Investigations are conducted under natural conditions.

Results: Valid - The focus is on design and procedures to gain "real," "rich," and "deep" data.

Advantages

 Produces more in-depth, comprehensive information.


 Uses subjective information and participant observation to describe the context, or natural
setting, of the variables under consideration, as well as the interactions of the different
variables in the context. It seeks a wide understanding of the entire situation.
Disadvantages

 The very subjectivity of the inquiry leads to difficulties in establishing the reliability and
validity of the approaches and information.
 It is very difficult to prevent or detect researcher induced bias.
 Its scope is limited due to the in-depth, comprehensive data gathering approaches
required.

Maintaining The Validity Of Qualitative Research

Be a listener - The subject(s) of qualitative research should provide the majority of the research
input. It is the researcher’s task to properly interpret the responses of the subject(s).
Record accurately - All records should be maintained in the form of detailed notes or electronic
recordings. These records should also be developed during rather than after the data gathering
session.
Initiate writing early - It is suggested that the researcher make a rough draft of the study before
ever going into the field to collect data. This allows a record to be made when needed. The
researcher is more prepared now to focus the data gathering phase on that information that will
meet the specific identified needs of the project.
Include the primary data in the final report - The inclusion of primary data in the final report
allows the reader to see exactly the basis upon which the researcher’s conclusions were made. In
short, it is better to include too much detail than too little.
Include all data in the final report - The researcher should not leave out pieces of information
from the final report because she/he cannot interpret that data. In these cases, the reader should
be allowed to develop his/her conclusions.
Be candid - The researcher should not spend too much time attempting to keep her/his own
feelings and personal reactions out of the study. If there is relevance in the researcher’s feelings
to the matter at hand, these feelings should be revealed.
Seek feedback - The researcher should allow others to critique the research manuscript
following the developmental process. Professional colleagues and research subjects should be
included in this process to ensure that information is reported accurately and completely.
Attempt to achieve balance - The researcher should attempt to achieve a balance between
perceived importance and actual importance. Often, the information reveals a difference in
anticipated and real areas of study significance.
Write accurately - Incorrect grammar, misspelled words, statement inconsistency, etc.
jeopardize the validity of an otherwise good study.

Assumptions Underlying Qualitative Methods


 Multiple realities exist in any given situation -- the researcher’s, those of the individuals
being investigated, and the reader or audience interpreting the results; these multiple
perspectives, or voices, of informants (i.e., subjects) are included in the study.
 The researcher interacts with those he studies and actively works to minimize the distance
between the researcher and those being researched.
 The researcher explicitly recognizes and acknowledges the value-laden nature of the
research.
 Research is context-bound.
 Research is based on inductive forms of logic; categories of interest emerge from
informants (subjects), rather than being identified a priori by the researcher.
 The goal is to uncover and discover patterns or theories that help explain a phenomenon of
interest.
 Determinations of accuracy involve verifying the information with informants or
"triangulating" among different sources of information (e.g., collecting information from
different sources).

Qualitative Methods

Three general types of qualitative methods:


 Case Studies - In a case study the researcher explores a single entity or phenomenon (‘the
case’) bounded by time and activity (e.g., a program, event, institution, or social group)
and collects detailed information through a variety of data collection procedures over a
sustained period of time. The case study is a descriptive record of an individual's
experiences and/or behaviors kept by an outside observer.
 Ethnographic Studies - In ethnographic research the researcher studies an intact cultural
group in a natural setting over a specific period of time. A cultural group can be any
group of individuals who share a common social experience, location, or other social
characteristic of interest.
 Phenomenological Studies - In a phenomenological study, human experiences are
examined through the detailed description of the people being studied -- the goal is to
understand the ‘lived experience’ of the individuals being studied. This approach
involves researching a small group of people intensively over a long period of time.

Concept Mapping

"Concept-Mapping" is a tool for assisting and enhancing many of the types of thinking and
learning that we are required to do at university. To do a Map, write the main idea in the centre
of the page -- it may be a word, a phrase, or a couple of juxtaposed ideas, for example -- then
place related ideas on branches that radiate from this central idea.

How to do a Map

 Print in capitals, for ease of reading. This will also encourage you to keep the points
brief.

 Use unlined paper, since the presence of lines on paper may hinder the non-linear process
of Mapping. If you must use lined paper, turn it so the lines are vertical.
 Use paper with no previous writing on it.
 Connect all words or phrases or lists with lines, to the centre or to other "branches."
When you get a new idea, start again with a new "spoke" from the centre.
 Go quickly, without pausing -- try to keep up with the flow of ideas. Do not stop to
decide where something should go, i.e., to order or organize material -- just get it down.
Ordering and analyzing are "linear" activities and will disrupt the Mapping process.
 Write down everything you can think of without judging or editing -- these activities will
also disrupt the Mapping process.
 If you come to a standstill, look over what you have done to see if you have left anything
out.
 You may want to use color-coding, to group sections of the Map.

Some Organizational Patterns That May Appear in a Concept-Map

 Branches. An idea may branch many times to include both closely and distantly related
ideas.
 Arrows. You may want to use arrows to join ideas from different branches.
 Groupings. If a number of branches contain related ideas, you may want to draw a circle
around the whole area.
 Lists.
 Explanatory/Exploratory notes. You may want to write a few sentences in the Map itself,
to explain, question, or comment on some aspect of your Map -- for example, the
relationship between some of the ideas.

Advantages of Mapping

Mapping may be seen as a type of brainstorming. Both Mapping and brainstorming may be used
to encourage the generation of new material, such as different interpretations and viewpoints.
Mapping, however, relies less on intentionally random input: during brainstorming, one may try
to think up wild, zany, off-the-wall ideas and connections. Brainstorming attempts to
encourage highly divergent "lateral" thinking, whereas Mapping, by its structure, provides
opportunity for convergent thinking, fitting ideas together as well as thinking up new ideas,
since it requires all ideas to be connected to the centre, and possibly to one another.
Paradoxically, the results of brainstorming usually appear on paper as lists or grids -- both
unavoidably linear structures: top to bottom, left to right. Mapping is less constrictive -- no idea
takes precedence arbitrarily (e.g., by being at the "top" of the list).

Here are some advantages of Mapping, which will become more apparent to you after you have
practiced this technique a few times:

 It clearly defines the central idea, by positioning it in the centre of the page.
 It allows you to indicate clearly the relative importance of each idea.
 It allows you to figure out the links among the key ideas more easily. This is particularly
important for creative work such as essay writing.
 It allows you to see all your basic information on one page.
 As a result of the above, and because each Map will look different, it makes recall and
review more efficient.
 It allows you to add in new information without messy scratching out or squeezing in.
 It makes it easier for you to see information in different ways, from different viewpoints,
because it does not lock it into specific positions.
 It allows you to see complex relationships among ideas, such as self-perpetuating systems
with feedback loops, rather than forcing you to fit non-linear relationships to linear
formats, before you have finished thinking about them.
 It allows you to see contradictions, paradoxes, and gaps in the material -- or in your own
interpretation of it -- more easily, and in this way provides a foundation for questioning,
which in turn encourages discovery and creativity.

Uses of Mapping

Summarizing Readings
Summarizing is important for at least two reasons:
1. it aids memory,
2. it encourages high-level, critical thinking, which is so important in university work.

Use Mapping in the following ways, to summarize an article, or a chapter in a book:

 Read the introduction and conclusion of the article, and skim it, looking at sub-headings,
graphs, and diagrams.
 Read the article in one sitting. For longer material, "chunk" it -- into chapters, for
example -- and follow this procedure for each chunk.
 Go back over the article until you are quite familiar with its content. (This is assuming
that it will be useful and relevant to your work -- one would not wish to spend this
amount of work on peripheral material).
 Do a Map as described above, from memory. Do not refer to the article or lecture notes
while you are doing the Map; if you do, you will disrupt the process.
 Look over what you have done. It should be apparent if you do not understand, or have
forgotten, anything. Refer back to the source material to fill in the gaps, but only after
you have tried to recall it without looking.
 Up to this point, the Map is made up of information derived from what you have read. If
you want to add your own comments, you can differentiate them by using a different
colored pen -- or you could make a whole new Map. This is useful if you want to go more
deeply into the material -- to help to remember or apply it, or to work on an essay. (See
the section on "Working on an Essay," below.)
 Now, ask questions about the material on the Concept-Map:
– How do the parts fit together?
– Does it all make sense? why, or why not?
– Is there anything missing, unclear, or problematic about it?
– How does it fit with other course material? How does it fit with your personal
experience? Are there parts that do not fit? Why not?
– What are the implications of the material?
– Could there be other ways of looking at it?
– Is the material true in all cases?

– How far does its usefulness extend?
– What more do you need to find out?

Of course, not all of these questions will apply to every Map; however, the more closely you
look at the material, the more questions will come to you. Try to think of the central, most
important question about the material: if something does not make sense, or seems unresolved,
try to state explicitly why, in what way, there is a problem. This may be difficult to do, but it is
worth the effort, because it will make it easier for you to find an answer.

Summarizing Lectures

Some people use Mapping to take lecture notes. If you find that this works for you, by all means
do it: however, if it does not work, you can certainly take lecture notes as you normally would,
and summarize them later (as soon as possible after the lecture) in the way described above. Be
sure to do this first from memory -- then check it over for accuracy. If possible, give yourself
adequate time to do this -- the more time you spend, the better your retention will be. However,
even a brief summary will have very beneficial effects for your memory, and your overall
understanding of the material -- its salient points and how they fit together.

Making Notes in a Seminar or Workshop

A seminar differs from a lecture in that it lays more emphasis on process: in a more-or-less open-
ended discussion among all members of the group, there is a less linear progression of ideas than
there is in a lecture. A Map can be useful for keeping track of the flow of ideas in such a context,
and for tying them together and commenting on them.

Reviewing for an Exam

Mapping can be a productive way to study for an exam, particularly if the emphasis of the course
is on understanding and applying abstract, theoretical material, rather than on simply reproducing
memorized information. Doing a Map of the course content can point out the most important
concepts and principles, and allow you to see the ways in which they fit together. This may also
help you to see your weak areas, and help you to focus your studying.

Working on an Essay

Mapping is a particularly powerful tool to use during the early stages of writing an essay, before
you write the first rough draft. When you start out exploring material that may be useful for your
essay, you can summarize your readings -- using Mapping, as described above -- to help discover
fruitful areas of research. Finding a suitable thesis is a process of exploration and approximation,
and later on, insight. You may want to look for something that you find interesting and somehow
problematical, with implications beyond itself that you can explore.

It is often difficult to find a powerful thesis for an essay; hence, there is an inevitable, often
unpleasant, and occasionally lengthy, period of confusion. During this period, to progress toward
a resolution, it is necessary to know where you stand:
– what you know;
– what your specific questions are;
– what your own opinions or interpretations of the material are;
– whether your own opinions are applicable or should be questioned.

Sampling

It is incumbent on the researcher to clearly define the target population. There are no strict rules
to follow, and the researcher must rely on logic and judgment. The population is defined in
keeping with the objectives of the study.

Sometimes, the entire population will be sufficiently small, and the researcher can include the
entire population in the study. This type of research is called a census study because data is
gathered on every member of the population.

Usually, the population is too large for the researcher to attempt to survey all of its members. A
small, but carefully chosen sample can be used to represent the population. The sample reflects
the characteristics of the population from which it is drawn.

Sampling methods are classified as either probability or nonprobability. In probability samples,
each member of the population has a known non-zero probability of being selected. Probability
methods include random sampling, systematic sampling, and stratified sampling. In
nonprobability sampling, members are selected from the population in some nonrandom manner.
These include convenience sampling, judgment sampling, quota sampling, and snowball
sampling. The advantage of probability sampling is that sampling error can be calculated.
Sampling error is the degree to which a sample might differ from the population. When inferring
to the population, results are reported plus or minus the sampling error. In nonprobability
sampling, the degree to which the sample differs from the population remains unknown.
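For a probability sample, the sampling error around a reported proportion can be computed with the familiar margin-of-error formula. A minimal sketch (the sample size and proportion below are illustrative):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# E.g., 50% of a random sample of 400 respondents agree with a statement:
moe = margin_of_error(0.5, 400)
print(f"plus or minus {moe:.1%}")  # about 4.9 percentage points
```

Because the formula involves a square root, quadrupling the sample size only halves the margin of error, which is why precision gains diminish as samples grow.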

Random sampling is the purest form of probability sampling. Each member of the population
has an equal and known chance of being selected. When there are very large populations, it is
often difficult or impossible to identify every member of the population, so the pool of available
subjects becomes biased.
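When the full population frame is available, drawing a simple random sample is straightforward. The sketch below uses a hypothetical frame of 1,000 members:

```python
import random

# Simple random sampling: every member of the frame has an equal chance
# of selection, drawn without replacement. The frame is hypothetical.
population = [f"member_{i}" for i in range(1, 1001)]
sample = random.sample(population, k=50)
print(len(sample))  # 50 distinct members
```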

Systematic sampling is often used instead of random sampling. It is also called an Nth name
selection technique. After the required sample size has been calculated, every Nth record is
selected from a list of population members. As long as the list does not contain any hidden order,
this sampling method is as good as the random sampling method. Its only advantage over the
random sampling technique is simplicity. Systematic sampling is frequently used to select a
specified number of records from a computer file.
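The Nth-name selection can be sketched as follows: compute the interval N, choose a random start within the first interval, then take every Nth record. The frame here is hypothetical:

```python
import random

def systematic_sample(frame, sample_size):
    """Nth-name selection: random start, then every Nth record."""
    n = len(frame) // sample_size        # sampling interval N
    start = random.randrange(n)          # random start within first interval
    return frame[start::n][:sample_size]

frame = list(range(1, 1001))             # e.g., 1,000 ordered records
sample = systematic_sample(frame, 100)   # interval N = 10
print(len(sample))
```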

Stratified sampling is a commonly used probability method that is superior to random sampling
because it reduces sampling error. A stratum is a subset of the population that shares at least one
common characteristic. Examples of stratums might be males and females, or managers and non-
managers. The researcher first identifies the relevant stratums and their actual representation in
the population. Random sampling is then used to select a sufficient number of subjects from each
stratum. "Sufficient" refers to a sample size large enough for us to be reasonably confident that
the stratum represents the population. Stratified sampling is often used when one or more of the
stratums in the population have a low incidence relative to the other stratums.
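Proportional stratified sampling can be sketched as: allocate the total sample size across strata in proportion to their population shares, then sample randomly within each stratum. The strata and sizes below are hypothetical:

```python
import random

def stratified_sample(strata, total_n):
    """Proportional allocation, then random sampling within each stratum."""
    pop = sum(len(members) for members in strata.values())
    sample = {}
    for name, members in strata.items():
        k = round(total_n * len(members) / pop)   # proportional share
        sample[name] = random.sample(members, k)
    return sample

# Hypothetical population: 100 managers, 900 non-managers; sample of 50.
strata = {"managers": list(range(100)),
          "non-managers": list(range(100, 1000))}
s = stratified_sample(strata, total_n=50)
print({name: len(members) for name, members in s.items()})
# managers: 5, non-managers: 45
```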

Convenience sampling is used in exploratory research where the researcher is interested in
getting an inexpensive approximation of the truth. As the name implies, the sample is selected
because it is convenient. This nonprobability method is often used during preliminary
research efforts to get a gross estimate of the results, without incurring the cost or time required
to select a random sample.

Judgment sampling is a common nonprobability method. The researcher selects the sample
based on judgment. This is usually an extension of convenience sampling. For example, a
researcher may decide to draw the entire sample from one "representative" city, even though the
population includes all cities. When using this method, the researcher must be confident that the
chosen sample is truly representative of the entire population.

Quota sampling is the nonprobability equivalent of stratified sampling. Like stratified sampling,
the researcher first identifies the stratums and their proportions as they are represented in the
population. Then convenience or judgment sampling is used to select the required number of
subjects from each stratum. This differs from stratified sampling, where the stratums are filled by
random sampling.

Snowball sampling is a special nonprobability method used when the desired sample
characteristic is rare. It may be extremely difficult or cost prohibitive to locate respondents in
these situations. Snowball sampling relies on referrals from initial subjects to generate additional
subjects. While this technique can dramatically lower search costs, it comes at the expense of
introducing bias because the technique itself reduces the likelihood that the sample will represent
a good cross section from the population.

Data Collection Methods

Data collection and analysis methods for impact evaluation vary along a continuum. At the one
end of this continuum are methods relying on random sampling; structured data collection
instruments that fit diverse experiences into predetermined response categories; and statistical
data analysis. These methods, generally associated with quantitative research, produce results
that are easy to summarize, compare, and generalize.

At the other end of the continuum are methods typically associated with qualitative research.
These methods are characterized by the following attributes:

 they tend to be open-ended and have less structured protocols (i.e., researchers may
change the data collection strategy by adding, refining, or dropping techniques or
informants)
 they rely more heavily on iterative interviews; respondents may be interviewed several
times to follow up on a particular issue, clarify concepts or check the reliability of data
 they use triangulation to increase the credibility of their findings (i.e., researchers rely on
multiple data collection methods to check the authenticity of their results)
 generally their findings are not generalizable to any specific population, rather each case
study produces a single piece of evidence that can be used to seek general patterns among
different studies of the same issue.

In between the two extremes, there are a number of possible evaluation methodologies combining
different aspects (sample design, research protocol, data collection, and data analysis) of the
quantitative and qualitative approaches.

Evaluations can also rely on participatory methods. These tend to be closer to the qualitative than
to the quantitative research approach. However, not all qualitative methods are participatory, and
inversely, many participatory techniques can be quantified. The participatory approach is very
much action-oriented. Thus, stakeholders themselves are responsible for collecting and analyzing
the information, and for generating recommendations for change. The role of an outside
evaluator is to facilitate and support this learning process.

By combining these different approaches, one can enrich the design as well as interpretation or
explanation of outcomes measured by the evaluation.

Qualitative methods

Qualitative methods for data collection play an important role in impact evaluation by providing
information useful to understand the processes behind observed results and assess changes in
people’s perceptions of their well-being. Furthermore, qualitative methods can be used to
improve the quality of survey-based quantitative evaluations by helping generate evaluation
hypotheses, strengthening the design of survey questionnaires, and expanding or clarifying
quantitative evaluation findings.

The qualitative methods most commonly used in evaluation can be classified in three broad
categories:
In-depth interview
Observation method
Document review
