
Q. 2 Differentiate between research question and hypothesis. Give examples.

A research question is a highly focused question that addresses one concept or component of the hypothesis, whereas the hypothesis itself states the relationship between two variables. The central question is a broad question that asks for an exploration of the central phenomenon or concept in a study. The inquirer poses this question, consistent with the emerging methodology of qualitative research, as a general issue so as not to limit the inquiry. To arrive at this question, ask: "What is the broadest question that I can ask in the study?"
Beginning researchers trained in quantitative research might struggle with this
approach because they are accustomed to the reverse approach: identifying
specific, narrow questions or hypotheses based on a few variables. In qualitative
research, the intent is to explore the complex set of factors surrounding the
central phenomenon and present the varied perspectives or meanings that
participants hold. The following are guidelines for writing broad, qualitative
research questions:
Ask one or two central questions followed by no more than five to seven sub
questions.
Research questions and hypotheses narrow the purpose statement and become
major signposts for readers. Qualitative researchers ask at least one central
question and several sub questions. They begin the questions with words such
as how or what and use exploratory verbs, such as explore or describe. They
pose broad, general questions to allow the participants to explain their ideas.
They also focus initially on one central phenomenon of interest. The questions
may also mention the participants and the site for
the research. Quantitative researchers write either research questions or
hypotheses.
Both forms include variables that are described, related, categorized into groups
for comparison, and the independent and dependent variables are measured
separately. In many quantitative proposals, writers use research questions;
however, a more formal statement of research employs hypotheses. These
hypotheses are predictions about the outcomes of the results, and they may be
written as alternative hypotheses specifying the
exact results to be expected (more or less, higher or lower of something). They
also may be stated in the null form, indicating no expected difference or no
relationship between groups on a dependent variable. Typically, the researcher
writes the independent variable(s) first, followed by the dependent variable(s).
One model for ordering the questions in a quantitative proposal is to begin with
descriptive questions followed by the inferential questions that relate variables
or compare groups.

Example: A Mixed Methods Question Written in Terms of Mixing Procedures

To what extent and in what ways do qualitative interviews with students and faculty members serve to contribute to a more comprehensive and nuanced understanding of this predicting relationship between CEEPT scores and student academic performance, via integrative mixed methods analysis? (Lee & Greene, 2007)
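Example: Null and Alternative Hypotheses (an illustrative pair, not taken from the source text)

Null hypothesis (H0): There is no difference in mean achievement between students taught with method A and students taught with method B.
Alternative (directional) hypothesis (H1): Students taught with method A have higher mean achievement than students taught with method B.

Note how the independent variable (method of instruction) is written first and the dependent variable (achievement) follows, as described above.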

Q. 1 Define research and describe the basic process of research


Research and experimental development is formal work undertaken systematically to increase the stock of knowledge, including knowledge of humanity, culture and society, and the use of this stock of knowledge to devise new applications (OECD (2002), Frascati Manual: Proposed Standard Practice for Surveys on Research and Experimental Development, 6th edition). [1] It is used to
establish or confirm facts, reaffirm the results of previous work, solve new or
existing problems, support theorems, or develop new theories. A research
project may also be an expansion on past work in the field. To test the validity of
instruments, procedures, or experiments, research may replicate elements of
prior projects, or the project as a whole. The primary purposes of basic research
(as opposed to applied research) are documentation, discovery, interpretation,
or the research and development of methods and systems for the advancement
of human knowledge. Approaches to research depend on epistemologies, which
vary considerably both within and between humanities and sciences.

Introduction to research process


The research process is a simple means of effectively locating information
for a research project, be it a research paper, an oral presentation, or
something else assigned by your professor. Because research is a process,
you will need to allow for ample time to refine or change your topic. Be sure
to allow a few weeks to have materials delivered from other libraries.
Basics of Doing Research
The steps below provide a simple and effective approach for conducting research for a paper, presentation, or other project that requires you to locate information about a topic. Depending on your topic and your familiarity with library research, you may need to rearrange or recycle these steps. Adapt this outline to your needs:

Step 1 - Choose your topic.
Step 2 - Find basic information.
Step 3 - Refine your topic.
Step 4 - Locate and retrieve materials.
Step 5 - Evaluate relevancy of materials.
Step 6 - Take notes.
Step 7 - Construct your project.

Step 1: Choose Your Topic


Select a topic that is of interest to you, or if you have been assigned a
topic, select an aspect or perspective of the topic that interests you. If you
are having trouble selecting a topic, you may find it useful to browse
magazines, journals, newspapers, reference sources, and online databases.
Remember, selecting a topic is the most important decision you will make in the research process. Without a topic, you can't go any further.

Step 2: Find Basic Information


Find basic information on your topic. Select a few key terms from your topic
and search for basic information in reference sources such as subject
encyclopedias, bibliographies, handbooks, library catalogs, books, online
databases, and Internet sources (Web sites). This preliminary search will
help you determine how much or how little information is available about
your topic.

Step 3: Refine Your Topic


Based on the quality and number of items located, you may need to refine
your topic. If your initial search renders too little information, try
broadening your topic. You can broaden a topic by searching for related
concepts/synonyms using different keywords, or by selecting different
resources.
If your initial search renders too much information, you will need to
narrow your topic. You can narrow your topic by using more specific terms
and by examining subject headings in books and/or online databases.
Finally, try examining book and article references for additional sources.
If you need assistance with refining your topic, ask a librarian and/or your
professor.

Step 4: Locate and Retrieve Materials


Once you have identified your topic, you can begin to locate and retrieve information. Before you begin locating information about your topic, you
will need to identify what information formats (articles, books, websites,
dissertations, etc.,) are needed and select the appropriate research tool(s).
The information format is usually determined by the requirements of your
research assignment or instructor.

Step 5: Evaluate Relevancy of Materials


After locating your information, you will need to review it for usefulness and relevance to your topic. A clear, well-defined topic allows you to quickly eliminate irrelevant information. After you determine the relevancy, you then need to evaluate the quality of your information.
The basic criteria for evaluating information are as follows:

(1) Authority - Who is the author? What are their credentials?

(2) Accuracy - Are the facts verifiable? Is the information correct?

(3) Objectivity - What is the purpose? Is there bias?

(4) Currency - Is the information up-to-date?

(5) Coverage - What is the scope of the information? What does it focus on?

Step 6: Take Notes


Throughout your research process you will need to keep accurate notes of what research tools and search strategies you used; this ensures that you won't retrieve the same information twice, and allows you to reproduce a particular search if needed. Record complete citations for all your information, even if you are unsure of whether or not you will use the information. Trying to locate the information at a later date may be difficult without proper citations.
A "complete citation" includes identifying information that allows you to
locate information when needed. Some common citations are formatted
using the APA and MLA style guidelines.

APA - American Psychological Association

MLA - Modern Language Association

Step 7: Construct Your Project


Finally, you are now ready to start preparing your paper, presentation, or project. You should have enough research materials to support your research topic. Be careful to cite any information that you have "quoted directly" or "paraphrased"; this way you can avoid committing plagiarism. Remember that research is a circular process: you may need to go back and locate additional information that your previous search did not locate.
Always give yourself enough time to conduct additional research, if
needed.

Q3. What sources of information are available to assist in the literature review? Discuss their importance.

A literature review is an evaluative report of information found in the literature related
to your selected area of study. The review should describe, summarise, evaluate and
clarify this literature. It should give a theoretical base for the research and help you
(the author) determine the nature of your research. Works which are irrelevant should
be discarded and those which are peripheral should be looked at critically.

A literature review is more than the search for information, and goes beyond
being a descriptive annotated bibliography. All works included in the review
must be read, evaluated and analysed (which you would do for an annotated
bibliography), but relationships between the literature must also be identified
and articulated, in relation to your field of research.
"In writing the literature review, the purpose is to convey to the reader what
knowledge and ideas have been established on a topic, and what their
strengths and weaknesses are. The literature review must be defined by a
guiding concept (eg. your research objective, the problem or issue you are
discussing, or your argumentative thesis). It is not just a descriptive list of the
material available, or a set of summaries.
Journal articles are the primary vehicle of communication in most science
disciplines and, therefore, journal indexes and e-journal packages are critical
information resources. To identify and remain knowledgeable about the main

indexes to the science literature and many other specific resources for each
science discipline, there are science librarians, library database lists and guides
arranged by subject, and specific literature guide books to assist you. Journal
and bibliographic indexes typically allow keyword searching of article titles,
abstracts, and subject headings or descriptors. E-journal packages (e.g., JSTOR;
ScienceDirect; SpringerLink; Wiley Interscience) often allow keyword searching
of the full text of articles as well. The ability to keyword search full text enables
a new level of searching. While producing many irrelevant results, full-text
searches can also find information not previously discoverable by searching
indexes. Multidisciplinary indexes, such as Google Scholar, Scopus, and Web of
Science, which analyze article bibliographies and track citations (i.e., who is
citing who) are valuable tools for identifying the most cited and presumably
important papers on particular topics. Once a relevant article is found, by
examining its bibliography additional older results can often be identified for
your topic. Conversely, once a relevant article is found, by examining who has
cited it additional newer results can often be identified for your topic. A caveat
is that more recent articles will not have had the time to build up the numbers
of citing articles that older articles have.
Using information sources in a systematic and structured manner will save you
a good deal of time. Developing a search strategy is vital as it provides you
with an overall structure for your search and provides a record of your search
history. This is an extremely useful record to have as you find yourself needing
to refine or change the focus of your searching as your research develops. It
can also improve the relevancy of results obtained as you have thought about
keywords and synonyms and how these relate to each other.
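As a small, hypothetical illustration (the database and terms below are assumptions, not from the text), one entry in a recorded search strategy might look like this:

Database searched: ERIC
Keywords and synonyms: "action research"; "teacher research"; classroom; "primary school"
Search string: ("action research" OR "teacher research") AND (classroom OR "primary school")
Limits applied: peer-reviewed journal articles, last ten years

Keeping such a record makes it easy to rerun, refine, or narrow the search later, as discussed above.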

Q 4. Discuss briefly data collection strategies for action research, ethnographic research and case study with suitable examples.

Action research, in simple terms, is an approach that teachers and facilitators
employ to
critically evaluate problems encountered in the classroom on a regular basis.
The goal of action research is to discover techniques to improve current practices through reflection and meta-analysis. This systematic research approach, which is not limited to any specific subject matter, involves collaborative efforts with other teachers, facilitators, students, and administrators. The purpose of an action research study is not to publish reports or validate results through independent research, but rather to apply and validate the findings through practice and practical application. Grounded in curriculum theory, action research involves three primary steps: (1) defining a problem, (2)
developing an action plan that includes testing a hypothesis and monitoring
applied changes, and (3) reflecting on and sharing any findings or results
(Arhar, Holly, & Kasten, 2001; Gay & Airasian, 1996; Leedy & Ormrod, 2001;
McKernan, 1996).

This paper reviews three data collection methods: (1) interviewing, (2) observation, and (3) questionnaires. Descriptions of various techniques used for data collection provide readers with practical approaches to consider when developing technical and operational training materials. Interviews are basic fact-finding interactions where one individual asks questions while another responds. By conducting interviews, researchers obtain a clearer understanding of an individual's background and experience. Knowledge of this experience helps the researcher better understand the context for an individual's behavior and decision-making rationale (Seidman, 1998). Observation is, in some part, an
intuitive process that allows individuals to collect information about others by
viewing their actions and behaviors in their natural surroundings. The role of the
observer can vary depending upon the level of involvement desired (e.g.,
remain as part of the background, no interaction; or, actively participate with
the group by asking questions and responding). Tools used to collect information
vary depending upon the type of data gathered. Informal observational tools
may include writing field notes, making entries into a log, or keeping a journal.
When observation is more formal, tools such as audiotapes, videotapes,
checklists, and rating scales may be used (Arhar, Holly, & Kasten, 2001).
Action research may employ the use of questionnaires when it is impossible to interview every respondent. Questionnaires generally consist of open- or closed-ended questions or items that measure facts, attitudes, or values. For example, an action research study might try to collect information about a particular teacher's presentation style after every class for a month. Because this involves the comments of 150 students regarding 20 lectures, it would be impossible to interview each one every time. Consequently, a standard questionnaire would be developed to gather the desired information. Both qualitative and quantitative research use questionnaires to collect data. Closed-ended questions force a response, score quickly, and are easy to evaluate. To ensure reliability, inventories often restate the question or item several times. Open-ended questions allow the participant to provide a more complete or comprehensive response. Although open-ended responses are difficult to analyze, they often provide specific and meaningful information (Arhar, Holly, & Kasten, 2001; Patten, 1998).
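To make concrete why closed-ended items score quickly and are easy to evaluate, here is a minimal Python sketch (the item wording and responses are invented for illustration, not taken from the cited sources):

# Tally hypothetical ratings for a closed-ended item such as
# "The teacher's presentation style helped me follow the lecture" (1 = strongly disagree ... 5 = strongly agree).
from collections import Counter
from statistics import mean

responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]   # invented ratings from ten students

print("Distribution:", Counter(responses))    # how many students chose each rating
print("Mean rating:", round(mean(responses), 2))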

Q.5 Why is experimental research more effective than non-experimental research?

Experimental research is unique in two very important respects: it is the only type of research that directly attempts to influence a particular variable, and, when properly applied, it is the best type for testing hypotheses about cause-and-effect relationships. In an experimental study, researchers look at the
effects of at least one independent variable on one or more dependent
variables. The independent variable in experimental research is also frequently
referred to as the experimental or treatment variable. The dependent
variable, also known as the criterion or outcome variable, refers to the results or
outcomes of the study.

The major characteristic of experimental research, which distinguishes it from all other types of research, is that researchers manipulate the independent variable. They decide the nature of the treatment, to whom it is to be applied, and to what extent. Independent variables frequently manipulated in educational research include methods of instruction, kinds of assignment, learning materials, rewards given to students, and types of questions asked by the teachers. Dependent variables that are frequently studied include achievement, interest in a subject, attention span, motivation, and attitudes towards school.
Essential Characteristics of Experimental Research
The word experiment has a long and illustrious history in the annals of research. It has often been hailed as the most powerful method that exists for studying cause and effect. Its origins go back to the very beginnings of history when, for example, primeval humans first experimented with ways to produce fire. One can imagine countless trial-and-error attempts on their part before success was achieved by striking sparks from rocks or by spinning wooden spindles in dry leaves. Much of the success of modern science is due to carefully designed and meticulously implemented experiments.
The basic idea underlying all experimental research is really quite simple: try something and systematically observe what happens. Formal experiments consist of two basic conditions. First, at least two (but often more) conditions or methods are compared to assess the effects of particular conditions or treatments (the independent variable). Second, the independent variable is directly manipulated by the researcher. Change is planned for and deliberately manipulated in order to study its effects on one or more outcomes (the dependent variable).

Q 7. How would you define variable? Discuss different kinds of variables.

A variable is something that can be changed, such as a characteristic or value. Variables are generally used in psychology experiments to determine whether changes to one thing result in changes to another.

Types of variables:

Binary variable - Observations (i.e., dependent variables) that occur in one of two possible states, often labelled zero and one. E.g., improved/not improved and completed task/failed to complete task.

Categorical variable - Usually an independent or predictor variable that contains values indicating membership in one of several possible categories. E.g., gender (male or female), marital status (married, single, divorced, widowed). The categories are often assigned numerical values used as labels, e.g., 0 = male, 1 = female. Synonym for nominal variable.

Confounding variable - A variable that obscures the effects of another variable. If one elementary reading teacher used a phonics textbook in her class and another instructor used a whole-language textbook in his class, and students in the two classes were given achievement tests to see how well they read, the independent variables (teacher effectiveness and textbooks) would be confounded. There is no way to determine if differences in reading between the two classes were caused by either or both of the independent variables.

Continuous variable - A variable that is not restricted to particular values (other than as limited by the accuracy of the measuring instrument). E.g., reaction time, neuroticism, IQ. Equal-size intervals on different parts of the scale are assumed, if not demonstrated. Synonym for interval variable.

Control variable - An extraneous variable that an investigator does not wish to examine in a study. Thus the investigator controls this variable. Also called a covariate.
Criterion variable - The presumed effect in a nonexperimental study.

Dependent variable - The presumed effect in an experimental study. The values of the dependent variable depend upon another variable, the independent variable. Strictly speaking, "dependent variable" should not be used when writing about nonexperimental designs.

Dichotomous variable - Synonym for binary variable.

Discrete variable - A variable having only integer values, for example, the number of trials needed by a student to learn a memorization task.

Dummy variables - Created by recoding categorical variables that have more than two categories into a series of binary variables. E.g., marital status, if originally labelled 1 = married, 2 = single, and 3 = divorced, widowed, or separated, could be redefined in terms of two variables as follows: var_1: 1 = single, 0 = otherwise; var_2: 1 = divorced, widowed, or separated, 0 = otherwise. For a married person, both var_1 and var_2 would be zero. In general, a categorical variable with k categories would be recoded in terms of k - 1 dummy variables. Dummy variables are used in regression analysis to avoid the unreasonable assumption that the original numerical codes for the categories, i.e., the values 1, 2, ..., k, correspond to an interval scale. Use: to place cases in specific groups. (A short worked sketch of this recoding appears after this glossary.)
Endogenous variable - A variable that is an inherent part of the system being studied and that is determined from within the system; a variable that is caused by other variables in a causal system.

Exogenous variable - A variable entering from, and determined from, outside of the system being studied. A causal system says nothing about its exogenous variables.

Independent variable - The presumed cause in an experimental study. All other variables that may impact the dependent variable are controlled. The values of the independent variable are under experimenter control. Strictly speaking, "independent variable" should not be used when writing about nonexperimental designs.

Interval variable - Synonym for continuous variable.

Intervening variable - A variable that explains a relation or provides a causal link between other variables. Also called by some authors a mediating variable or intermediary variable. Example: the statistical association between income and longevity needs to be explained, because just having money does not make one live longer. Other variables intervene between money and long life. People with high incomes tend to have better medical care than those with low incomes. Medical care is an intervening variable: it mediates the relation between income and longevity.

Latent variable - An underlying variable that cannot be observed. It is hypothesized to exist in order to explain other variables, such as specific behaviors, that can be observed. Example: if we observe the voting records of members of the House of Representatives on spending bills for the military, food stamps, law enforcement, and promoting business investment, we might find underlying patterns that could be explained by postulating latent variables such as conservatism and liberalism.

Manifest variable - An observed variable assumed to indicate the presence of a latent variable. Also known as an indicator variable. We cannot observe intelligence directly, for it is a latent variable. We can look at indicators such as vocabulary size, success in one's occupation, IQ test score, ability to play complicated games (e.g., bridge) well, writing ability, and so on.

Manipulated variable - Synonym for independent variable.

Mediating variable - Synonym for intervening variable. Example: parents transmit their social status to their children directly, but they also do so indirectly, through education: parents' status -> child's education -> child's status.

Moderating variable - A variable that influences, or moderates, the relation between two other variables and thus produces an interaction effect.

Nominal variable - Synonym for categorical variable.
Ordinal variable - A variable used to rank a sample of individuals with respect to some characteristic, but differences (i.e., intervals) between different points of the scale are not necessarily equivalent. Example: anxiety might be rated on a scale of none, mild, moderate, and severe, with numerical values of 0, 1, 2, 3. A patient with an anxiety score of 1 is ranked as less anxious than a patient with a score of 3, but patients with scores 0 and 2 do not necessarily have the same difference in anxiety as patients with scores of 1 and 3.

Outcome variable - The presumed effect in a nonexperimental study. Synonym for criterion variable.

Polychotomous variables - Variables that can have more than two possible values. Strictly speaking, this includes all but binary variables; the usual reference is to categorical variables with more than two categories.

Predictor variable - The presumed cause in a nonexperimental study. Often used in correlational studies. For example, SAT scores predict first-semester GPA; the SAT score is the predictor variable.

Treatment variable - Synonym for independent variable.
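To illustrate the dummy-variable recoding described in the glossary above, here is a minimal Python sketch (a hypothetical worked example, not part of the original text):

# Recode the three-category marital-status example into k - 1 = 2 dummy variables.
statuses = ["married", "single", "divorced/widowed/separated", "single", "married"]

recoded = []
for status in statuses:
    var_1 = 1 if status == "single" else 0                        # 1 = single, 0 = otherwise
    var_2 = 1 if status == "divorced/widowed/separated" else 0    # 1 = divorced/widowed/separated, 0 = otherwise
    recoded.append((var_1, var_2))                                # a married person is coded (0, 0)

print(recoded)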

Q 8. What is the difference between descriptive statistics and inferential statistics?

Descriptive and Inferential Statistics
When analysing data, for example, the marks achieved by 100 students for a
piece of coursework, it is possible to use both descriptive and inferential
statistics in your analysis of their marks. Typically, in most research conducted
on groups of people, you will use both descriptive and inferential statistics to
analyse your results and draw conclusions. So what are descriptive and
inferential statistics? And what are their differences?
Descriptive Statistics
Descriptive statistics is the term given to the analysis of data that helps
describe, show or summarize data in a meaningful way such that, for example,
patterns might emerge from the data. Descriptive statistics do not, however,
allow us to make conclusions beyond the data we have analysed or reach
conclusions regarding any hypotheses we might have made. They are simply a
way to describe our data.
Descriptive statistics are very important, as if we simply presented our raw data it would be hard to visualise what the data was showing, especially if there was a lot of it. Descriptive statistics therefore allow us to present the data in a more meaningful way which allows simpler interpretation of the data. For example, if
we had the results of 100 pieces of students' coursework, we may be interested
in the overall performance of those students. We would also be interested in the
distribution or spread of the marks. Descriptive statistics allow us to do this.
How to properly describe data through statistics and graphs is an important
topic and discussed in other Laerd Statistics Guides. Typically, there are two
general types of statistic that are used to describe data:

Measures of central tendency: these are ways of describing the central position of a frequency distribution for a group of data. In this case, the frequency distribution is simply the distribution and pattern of marks scored by the 100 students from the lowest to the highest. We can describe this central position using a number of statistics, including the mode, median, and mean.

Measures of spread: these are ways of summarizing a group of data by describing how spread out the scores are. For example, the mean score of our 100 students may be 65 out of 100. However, not all students will have scored 65 marks. Rather, their scores will be spread out. Some will be lower and others higher. Measures of spread help us to summarize how spread out these scores are. To describe this spread, a number of statistics are available to us, including the range, quartiles, absolute deviation, variance and standard deviation.
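To make the two kinds of descriptive summary above concrete, here is a minimal Python sketch (the marks are invented for illustration; the original text does not include code):

from statistics import mean, median, mode, variance, stdev

marks = [45, 52, 58, 61, 61, 65, 67, 70, 74, 81]   # hypothetical coursework marks out of 100

# Measures of central tendency
print("mean:", mean(marks), "median:", median(marks), "mode:", mode(marks))

# Measures of spread
print("range:", max(marks) - min(marks))
print("variance:", round(variance(marks), 2), "standard deviation:", round(stdev(marks), 2))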

When we use descriptive statistics it is useful to summarize our group of data using a combination of tabulated description (i.e. tables), graphical description (i.e. graphs and charts) and statistical commentary (i.e. a discussion of the results).
Inferential Statistics
We have seen that descriptive statistics provide information about our
immediate group of data. For example, we could calculate the mean and
standard deviation of the exam marks for the 100 students and this could
provide valuable information about this group of 100 students. Any group of
data like this, which includes all the data you are interested in, is called a population. A population can be small or large, as long as it includes all the
data you are interested in. For example, if you were only interested in the exam
marks of 100 students, then the 100 students would represent your population.
Descriptive statistics are applied to populations and the properties of
populations, like the mean or standard deviation, are called parameters as
they represent the whole population (i.e. everybody you are interested in).
Often, however, you do not have access to the whole population you are
interested in investigating but only have a limited number of data instead. For
example, you might be interested in the exam marks of all students in the UK. It
is not feasible to measure all exam marks of all students in the whole of the UK
so you have to measure a smaller sample of students, for example, 100
students, that are used to represent the larger population of all UK students.
Properties of samples, such as the mean or standard deviation, are not called
parameters but statistics. Inferential statistics are techniques that allow us to
use these samples to make generalizations about the populations from which
the samples were drawn. It is, therefore, important that the sample accurately represents the population. The process of achieving this is called sampling. Inferential
statistics arise out of the fact that sampling naturally incurs sampling error and
thus a sample is not expected to perfectly represent the population.
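As a rough sketch of the inferential step described above (all numbers are invented; this is an illustration, not part of the original text), Python can be used to draw a sample of 100 marks from a large simulated population and estimate the population mean with an approximate 95% confidence interval:

import math
import random
import statistics

random.seed(1)
population = [random.gauss(62, 10) for _ in range(100_000)]    # stand-in for all students' marks

sample = random.sample(population, 100)                         # the 100 students actually measured
sample_mean = statistics.mean(sample)
standard_error = statistics.stdev(sample) / math.sqrt(len(sample))

# Approximate 95% confidence interval for the (unknown) population mean
lower = sample_mean - 1.96 * standard_error
upper = sample_mean + 1.96 * standard_error
print(f"sample mean = {sample_mean:.1f}, 95% CI roughly ({lower:.1f}, {upper:.1f})")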

Q 9. Discuss different sampling techniques

Sampling techniques
What is sampling?

A shortcut method for investigating a whole population

Data is gathered on a small part of the whole parent population or sampling frame, and used to inform what the whole picture is like

Why sample?
In reality there is simply not enough time, energy, money, labour/manpower, equipment, or access to suitable sites to measure every single item or site within the parent population or whole sampling frame.
Therefore an appropriate sampling strategy is adopted to obtain a
representative, and statistically valid sample of the whole.
Sampling considerations

Larger sample sizes are more accurate representations of the whole

The sample size chosen is a balance between obtaining a statistically valid representation, and the time, energy, money, labour, equipment and access available

A sampling strategy made with the minimum of bias is the most statistically valid

Most approaches assume that the parent population has a normal distribution, where most items or individuals are clustered close to the mean, with few extremes

A 95% probability or confidence level is usually assumed, for example 95% of items or individuals will be within plus or minus two standard deviations from the mean

This also means that up to five per cent may lie outside of this - sampling, no matter how good, can only ever be claimed to be a very close estimate

Sampling techniques
Three main types of sampling strategy:

Random

Systematic

Stratified

Within these types, you may then decide on a point, line, or area method.
Random sampling

Least biased of all sampling techniques; there is no subjectivity - each member of the total population has an equal chance of being selected

Can be obtained using random number tables

Microsoft Excel has a function to produce random numbers

The function is simply:

=RAND()

Type that into a cell and it will produce a random number in that cell. Copy the
formula throughout a selection of cells and it will produce random numbers.
You can modify the formula to obtain whatever range you wish, for example if
you wanted random numbers from one to 250, you could enter the following
formula:

=INT(250*RAND())+1

Where INT eliminates the digits after the decimal, 250* creates the range to be
covered, and +1 sets the lowest number in the range.
Paired numbers could also be obtained using:

=INT(9000*RAND())+1000

These can then be used as grid coordinates, metre and centimetre sampling
stations along a transect, or in any feasible way.
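For readers working outside a spreadsheet, here is a hedged Python equivalent of the formulas above (the specific ranges are simply the examples already given in the text):

import random

print(random.randint(1, 250))        # like =INT(250*RAND())+1: a whole number from 1 to 250
print(random.randint(1000, 9999))    # like =INT(9000*RAND())+1000: a four-digit paired number

# Ten distinct sampling stations drawn at random from 250 possible stations
stations = random.sample(range(1, 251), 10)
print(sorted(stations))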
Methodology
A. Random point sampling

A grid is drawn over a map of the study area

Random number tables are used to obtain coordinates/grid references for the points

Sampling takes place as feasibly close to these points as possible

B. Random line sampling

Pairs of coordinates or grid references are obtained using random number tables, and marked on a map of the study area

These are joined to form lines to be sampled

C. Random area sampling

Random number tables generate coordinates or grid references which are used to mark the bottom left (south west) corner of quadrats or grid squares to be sampled

Figure one: A random number grid showing methods of generating random numbers, lines and areas.

Advantages and disadvantages of random sampling


Advantages:

Can be used with large sample populations

Avoids bias

Disadvantages:

Can lead to poor representation of the overall parent population or area if large areas are not hit by the random numbers generated. This is made worse if the study area is very large

There may be practical constraints in terms of time available and access to certain parts of the study area

Systematic sampling
Samples are chosen in a systematic, or regular, way.

They are evenly/regularly distributed in a spatial context, for example every two metres along a transect line

They can be at equal/regular intervals in a temporal context, for example every half hour or at set times of the day

They can be regularly numbered, for example every 10th house or person

Methodology
A. Systematic point sampling
A grid can be used and the points can be at the intersections of the grid lines
(A), or in the middle of each grid square (B). Sampling is done at the nearest
feasible place. Along a transect line, sampling points for vegetation/pebble data collection could be identified systematically, for example every two metres or every 10th pebble.
B. Systematic line sampling
The eastings or northings of the grid on a map can be used to identify transect
lines (C and D) alternatively, along a beach it could be decided that a transect
up the beach will be conducted every 20 metres along the length of the beach
C. Systematic area sampling
A 'pattern' of grid squares to be sampled can be identified using a map of the study area, for example every second/third grid square down or across the area (E) - the south-west corner will then mark the corner of a quadrat. Patterns can be any shape or direction as long as they are regular (F).

Figure two: Systematic sampling grid showing methods of generating systematic points, lines and areas.
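A minimal Python sketch of the systematic approach described above (the transect length and spacing are invented for illustration):

# Sampling points every two metres along a 100 m transect
transect_length_m = 100
interval_m = 2
sampling_points = list(range(0, transect_length_m + 1, interval_m))
print(sampling_points)               # 0, 2, 4, ... 100

# Every 10th pebble from a numbered collection of 200 pebbles
pebbles = list(range(1, 201))
every_tenth = pebbles[9::10]         # pebbles 10, 20, 30, ...
print(every_tenth)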
Advantages and disadvantages of systematic sampling
Advantages:

It is more straightforward than random sampling

A grid doesn't necessarily have to be used; sampling just has to be at uniform intervals

A good coverage of the study area can be more easily achieved than using random sampling

Disadvantages:

It is more biased, as not all members or points have an equal chance of being selected

It may therefore lead to over- or under-representation of a particular pattern

Stratified sampling
This method is used when the parent population or sampling frame is made up
of sub-sets of known size. These sub-sets make up different proportions of the
total, and therefore sampling should be stratified to ensure that results are
proportional and representative of the whole.
A. Stratified systematic sampling
The population can be divided into known groups, and each group sampled
using a systematic approach. The number sampled in each group should be in
proportion to its known size in the parent population.
For example: the make-up of different social groups in the population of a town
can be obtained, and then the number of questionnaires carried out in different
parts of the town can be stratified in line with this information. A systematic
approach can still be used by asking every fifth person.
B. Stratified random sampling
A wide range of data and fieldwork situations can lend themselves to this
approach - wherever there are two study areas being compared, for
example two woodlands, river catchments, rock types or a population with subsets of known size, for example woodland with distinctly different habitats.
Random point, line or area techniques can be used as long as the number of
measurements taken is in proportion to the size of the whole.
For example: if an area of woodland was the study site, there would likely be different types of habitat (sub-sets) within it. Random sampling may altogether 'miss' one or more of these.

Stratified sampling would take into account the proportional area of each
habitat type within the woodland and then each could be sampled accordingly;
if 20 samples were to be taken in the woodland as a whole, and it was found
that a shrubby clearing accounted for 10% of the total area, two samples would
need to be taken within the clearing. The sample points could still be identified
randomly (A) or systematically (B) within each separate area of woodland.

Figure three: A diagram highlighting the benefits of using stratified random sampling and stratified systematic sampling within certain fieldwork sites.
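A minimal Python sketch of the woodland example above (habitat proportions are invented, apart from the 10% clearing mentioned in the text):

# Allocate 20 samples to habitats in proportion to the area each covers
habitat_share = {"mature woodland": 0.6, "wet flush": 0.3, "shrubby clearing": 0.1}
total_samples = 20

allocation = {habitat: round(total_samples * share) for habitat, share in habitat_share.items()}
print(allocation)                    # {'mature woodland': 12, 'wet flush': 6, 'shrubby clearing': 2}

# Within each habitat, individual points can then be chosen randomly or systematically, as described above.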
Advantages and disadvantages of stratified sampling
Advantages:

It can be used with random or systematic sampling, and with point, line or area techniques

If the proportions of the sub-sets are known, it can generate results which are more representative of the whole population

It is very flexible and applicable to many geographical enquiries

Correlations and comparisons can be made between sub-sets

Disadvantages:

The proportions of the sub-sets must be known and accurate if it is to work properly

It can be hard to stratify questionnaire data collection; accurate, up-to-date population data may not be available and it may be hard to identify people's age or social background effectively

Q 10. Define research ethics and the approaches used in ethical issues.

Research ethics involves the application of fundamental ethical principles to
a variety of topics involving scientific research. These include the design and
implementation of research involving human experimentation, animal
experimentation, various aspects of academic scandal, including scientific
misconduct (such as fraud, fabrication of data and plagiarism), whistleblowing;
regulation of research, etc. Research ethics is most developed as a concept in
medical research. The key agreement here is the 1964 Declaration of Helsinki. The Nuremberg Code is an earlier agreement, but one that still contains many important points. Research in the social sciences presents a different set of issues than
those in medical research.
The scientific research enterprise is built on a foundation of trust. Scientists
trust that the results reported by others are valid. Society trusts that the results
of research reflect an honest attempt by scientists to describe the world
accurately and without bias. But this trust will endure only if the scientific
community devotes itself to exemplifying and transmitting the values
associated with ethical scientific conduct.[1]
There are many ethical issues to be taken into serious consideration for
research. Sociologists need to be aware of having the responsibility to secure
the actual permission and interests of all those involved in the study. They
should not misuse any of the information discovered, and there should be a
certain moral responsibility maintained towards the participants. There is a duty
to protect the rights of people in the study as well as their privacy and
sensitivity. The confidentiality of those involved in the observation must be
carried out, keeping their anonymity and privacy secure. As pointed out in the BSA (British Sociological Association) guidelines for sociology, all of these ethical obligations must be honoured unless there are overriding reasons not to do so - for example, in cases of illegal or terrorist activity.
Ethical approach
Ethical scrutiny of research, scientific integrity and good research governance
are increasingly high-profile concerns to research funders and research
organisations. Whilst these concerns are not limited to research involving
human subjects, as a social science based institution, the Institute of Education
takes its responsibilities in this area very seriously.

Following the adoption of a new policy on research ethics in June 2005, we have
an established committee structure ensuring that our governance procedures
are appropriate to our needs and those of our funders. All new staff and
student research proposals at the Institute of Education are required to
undergo an ethics review. A new Code of Practice for Responding to Complaints
by Research Participants about the Ethical Conduct of Research was approved in
June 2007. All our research is conducted with reference to the appropriate
professional code of practice for social research (principally BERA, BPS or BSA).
While sound governance of research is clearly essential, we also expect the
highest possible standards from our researchers. Our ethics review processes
are explicitly designed in order to promote a reflective and proactive approach
to research ethics.

Q 6. Enlist various strategies to analyze data statistically in quantitative research. Discuss any five in detail.

Data analyses for quantitative studies and qualitative studies are quite different. Whereas the raw data for quantitative studies are numbers (e.g., test scores), the raw data for qualitative studies are words and possibly visual materials such as photos. These data include primarily field notes, often supplemented by documents and interview transcripts. Since the focus is on what people actually said, direct quotations are preferable to paraphrases or the observer's recollections. This is easier said than done, however. Having the bulk of the data represent quotations usually requires transcripts of many interview tapes, which is no mean task; it is both immensely time consuming and expensive. Statistical analysis too often has meant the manipulation of ambiguous data by means of dubious methods to solve a problem that has not been defined. The purpose of this article is to provide
readers with definitions and examples of widely used concepts in statistics. This
article first discusses some general principles for the planning of experiments
and data visualization. Then, since we expect that most readers are not
studying this article to learn statistics but instead to find practical methods for
analyzing data, a strong emphasis has been put on choice of appropriate
standard statistical model and statistical inference methods (parametric, nonparametric, resampling methods) for different types of data. Then, methods for
processing multivariate data are briefly reviewed. The section following it deals
with clinical trials. Finally, the last section discusses computer software and
guides the reader through a collection of bibliographic references adapted to
different levels of expertise and topics.

Choosing a random sample may not be easy, and there are two types of errors associated with choosing representative samples: sampling errors and non-sampling errors. Sampling errors are those
errors due to chance variations resulting from sampling a population. For
example, in a population of 100,000 individuals, suppose that 100 have a
certain genetic trait and in a (random) sample of 10,000, 8 have the trait. The experimenter will estimate that 8/10,000 of the population or 80/100,000 individuals have the trait, and in doing so will have underestimated the actual
percentage. Imagine conducting this experiment (i.e., drawing a random sample
of 10,000 and examining for the trait) repeatedly. The observed number of
sampled individuals having the trait will fluctuate. This phenomenon is called
the sampling error. Indeed, if sampling is truly random, the observed number
having the trait in each repetition will fluctuate randomly about 10.
Furthermore, the limits within which most fluctuations will occur are estimable
using standard statistical methods. Consequently, the experimenter not only
acknowledges the presence of sampling errors, but he can estimate their effect.
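A minimal Python simulation of the sampling-error example above (this sketch is illustrative and not part of the original text):

import random

random.seed(0)
population = [1] * 100 + [0] * 99_900        # 1 = has the trait, 0 = does not (100 carriers in 100,000)

counts = []
for _ in range(20):                           # repeat the sampling experiment 20 times
    sample = random.sample(population, 10_000)
    counts.append(sum(sample))                # observed number of carriers in each sample

print(counts)                                 # values fluctuate around the expected number, 10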
In contrast, variation associated with improper sampling is called non-sampling
error. For example, the entire target population may not be accessible to the
experimenter for the purpose of choosing a sample. The results of the analysis
will be biased if the accessible and non-accessible portions of the population are
different with respect to the characteristic(s) being investigated. Increasing
sample size within the accessible portion will not solve the problem. The
sample, although random within the accessible portion, will not be
representative of the target population. The experimenter is often not aware
of the presence of non-sampling errors (e.g., in the above context, the
experimenter may not be aware that the trait occurs with higher frequency in a
particular ethnic group that is less accessible to sampling than other groups
within the population). Furthermore, even when a source of non-sampling error
is identified, there may not be a practical way of assessing its effect. The only
recourse when a source of non-sampling error is identified is to document its
nature as thoroughly as possible. Clinical trials involving survival studies are
often associated with specific non-sampling errors.
