
Formally, a theory is defined as an integrated set of defined concepts and statements that
present a view of a phenomenon and can be used to describe, explain, predict, and/or control that
phenomenon.
Theories are tested through research to determine the correctness of their descriptions,
explanations, predictions, and strategies to control outcomes. In qualitative research, theory is
developed as an outcome of the study, whereas in quantitative research an existing theory guides
the development of the study.
Theories have been developed in nursing to explain phenomena important to clinical
practice.

Attachment Theory
Attachment is an emotional bond to another person. Psychologist John Bowlby was the
first attachment theorist, describing attachment as a "lasting psychological connectedness
between human beings" (Bowlby, 1969, p. 194). Bowlby believed that the earliest bonds formed
by children with their caregivers have a tremendous impact that continues throughout life.
According to Bowlby, attachment also serves to keep the infant close to the mother, thus
improving the child's chances of survival.
The central theme of attachment theory is that mothers who are available and responsive
to their infant's needs establish a sense of security. The infant knows that the caregiver is
dependable, which creates a secure base for the child to then explore the world.

Characteristics of Attachment
• Safe Haven: When the child feels threatened or afraid, he or she can return to the caregiver
for comfort and soothing.

• Secure Base: The caregiver provides a secure and dependable base for the child to
explore the world.

• Proximity Maintenance: The child strives to stay near the caregiver, thus keeping the
child safe.

• Separation Distress: When separated from the caregiver, the child will become upset and
distressed.
Human beings have sought to acquire knowledge through experience, authority,
deductive reasoning, inductive reasoning, and the scientific approach. The scientific approach is
widely regarded as the single most reliable source of new knowledge.

Acquiring knowledge through traditions, authority, borrowing, trial and error, personal
experience, role modelling, intuition, and reasoning is important in nursing. However, these
ways of acquiring knowledge are inadequate for providing a knowledge base that is scientific and
holistic, as well as process oriented and outcomes focused; thus a variety of research methods are needed to
generate this knowledge. This section introduces quantitative, qualitative, and outcomes research
methods that have been used to generate knowledge for nursing practice.

Most contemporary theories posit stages of information acquisition that follow the steps
set out by Aristotle. These stages correspond to inductive and deductive reasoning
(Johnson-Laird and Byrne, 1991). Induction is defined as a process whereby from sensible
singulars, perceived by the senses, one arrives at universal concepts and principles held by the
intellect. Thus, from the sense experience of even a single yellow tulip, the intellect grasps that it
is a special kind, a kind found in every single tulip. The person proves not only that he sees the
tulip but also that he knows what kind of thing the tulip is by the following: he is able to point
out all the others of the same kind. If the individual did not know the essence or whatness
existing in each tulip, he could not group them together. Deduction is defined as "the human
process of going from one thing to another, i.e., of moving from the known to the unknown ...
Utilising what he knows, the human being is able to move to what he doesn't see directly. In
other words, the rational person by means of what he already knows, is able to go beyond his
immediate perception and solve very obscure problems. This is the nature of the reasoning
process: to go from the known to the unknown."

When seeking to solve a problem, which includes finding explanations for phenomena of
interest, we must make our way through these three stages. Stage One, Simple Apprehension,
comprises an attempt to discover the "whatness", or fundamental nature of an object. Stage Two,
Judgement, combines and divides concepts inherent in the description of the nature of the object.
It establishes general principles common to all instances of a given class of object, and these will
enable us to combine information derived from Stage One to establish certain facts. Stage Three,
Reason, moves us beyond what is known to the unknown. That is to say, we infer the existence
of the unknown from what we do know about the object.

I. Introduction

Evidence based practice is the use of the best clinical evidence in making patient care decisions;
such evidence typically comes from research conducted by nurses and other health care
professionals. Research is systematic inquiry that uses disciplined methods to answer questions
or solve problems. Nursing research has experienced remarkable growth in the past three
decades, providing nurses with an increasingly sound evidence base from which to practice. Yet
many questions endure and much remains to be done to incorporate research based evidence into
nursing practice. The authenticity of research findings needs to be assessed by careful critical
analysis to broaden understanding, determine evidence for use in practice and provide a
background for conducting further study.

II. Definition of critique

1. A research critique is a careful appraisal of the strengths and weaknesses of the study
2. An intellectual research critique is a careful, complete examination of a study to judge its
strengths, weaknesses, logical links, meaning and significance
3. The process of objectively and critically evaluating a research report’s content for
scientific merit and application to practice, theory or education.
III. Steps in conducting research critique

1. Read and critique the entire study. A research critique involves examining the quality of
all steps of the research process
2. Examine the organization and presentation of the research report. A well prepared report
is complete, concise, clearly presented and logically organized. It does not involve
excessive jargon that is difficult for students and practicing nurses to read. The references
need to be complete and presented in a consistent manner.
3. Examine the significance of the problem studied for nursing practice. The focus of
nursing studies needs to be on the significant practice problems if a sound knowledge
base is to be developed for the profession.
4. Identify strengths and weaknesses of a study. All studies have strengths and weaknesses, so
attention must be given to all aspects of the study.
5. Be objective and realistic in identifying the study’s strengths and weaknesses. Be balanced
in the critique. Try not to be overly critical in identifying a study’s weaknesses or overly
flattering in identifying strengths.
6. Provide specific examples of the strengths and weaknesses of a study. Examples provide
evidence for your critique of the strengths and weaknesses of a study.
7. Provide a rationale for your critique. Include justifications for the critique and document
ideas with sources from the current literature. This strengthens the quality of the critique
and documents the use of critical thinking skills.
8. Suggest modifications for future studies. Modifications in future studies will increase the
strengths and decrease the weaknesses identified in the present study.
9. Discuss the feasibility of replication of the study. Is the study presented in enough detail
to be replicated?
10. Discuss the usefulness of the findings for practice. The findings from the study need to be
linked to the findings of previous studies. All those findings need to be examined for use
in clinical practice.

IV. Phases of research critique

The phases of research critique are described briefly here:

A. QUALITATIVE RESEARCH

When critiquing qualitative studies, one must examine differences between qualitative approaches
such as grounded theory, phenomenology, or ethnography. Five standards have been developed to
evaluate qualitative studies:

1. Descriptive vividness:

The study purpose, significance and interpretations must be articulated in such detail and
richness that the reader has the sense of personally experiencing the event and clearly understands
the significance of the findings. The threats to descriptive vividness include:

1. Failure to include essential descriptive information
2. Lack of clarity in description
3. Inadequate interpretative/analytic skill (what is most essential, characteristic and defining
about a given phenomenon)

Guidelines:

• Was the significance of the study adequately described?
• Was the purpose of the study clearly described?
• Were the interpretations presented in a descriptive way that illuminated more than the
quotes did?

2. Methodological Congruence:

It requires knowledge of the methodological approach the researchers used and whether that
approach was consistent with the philosophical basis of the study. Methodological excellence has
four dimensions

a. Adequate documentation of the participants: Requires a detailed description of the study
participants, rationale for why and how the participants were selected and a description of the
context and location where the study was conducted. Threats include:

• Failure to describe the participants in detail
• Failure to provide a rationale for selecting the participants
• Failure to describe the context or location of the study so that others can determine if the
findings are applicable to their setting.

b. Careful attention to the procedural approach: How carefully did the researcher apply the
selected procedures of the study? To the extent possible, the researcher must clearly state the
steps that were taken to ensure that data were accurately recorded and that the data obtained are
representative of the data as a whole. Examine the description of assumptions, the data collection
process, and the role of the researcher for threats to the procedural approach. Threats include:

• Failure to articulate the assumptions associated with the research
• Failure to establish trust with the participants, open dialogue and a conversational
approach to data collection.
• Failure to ask appropriate questions that address the participant’s beliefs, experiences,
values or perceptions.
• Failure to adequately describe the data collection process
• Failure to spend adequate time gathering data or to conduct multiple interviews
• Failure to describe the data collection procedures used by multiple data collectors
• Failure to use appropriate process for selecting and gaining access to participants
• Failure to detail the role of the researcher during the interview process
• Failure to describe the qualitative expertise of the researchers

Guidelines:
1. Did the researcher identify the philosophical or theoretical base of the study?
2. Were the assumptions underlying the study articulated? Were the assumptions and data
collection procedures congruent?
3. Was adequate trust established with the participants? Was there an open dialogue with a
conversational approach to data collection?
4. Were research questions articulated? Did the researcher ask questions that explore
participant’s experiences, beliefs, values or perceptions?
5. Was the data collection process adequately described?
6. Did the researcher spend sufficient time with participants gathering data? Did the
researcher conduct multiple interviews?
7. Was the approach of multiple data collectors similar?
8. Was the method of selecting and gaining access to the study participants reasonable?
9. Was the role of the researcher during the interview process described? Were the
researcher’s qualitative credentials and expertise described?

c. Adherence to ethical standards: requires recognition and discussion by the researcher of the
ethical implications related to the study. The report must indicate that the researcher took action
to ensure that the rights of the participants were protected during the study. Examine the data
gathering process and identify potential threats which include:

• Failure to inform participants of their rights
• Failure to obtain informed consent from the participants
• Failure to protect participant rights

d. Auditability: The research report needs to be sufficiently detailed to allow a second researcher
with a similar background and philosophical approach, using the original data and the decision
trail, to arrive at conclusions similar to those of the original researcher. Threats:

• Failure of the researcher to record the nature of the decisions made, the data on which
they are based and the decision trail, rules for arriving at conclusions. Other researchers
with a similar background and philosophical background are not able to arrive at similar
conclusions after applying the decision rules to the data.
• Failure to include enough participant quotes to support the findings. The interpretative
statements developed do not correspond with the findings.
• Failure to provide quotes that are sufficiently rich or detailed to allow judgments to be
made. This flaw also has been described as not achieving saturation or redundancy in the
data
• Failure to provide specific examples of the phenomenon being investigated.

Guidelines:

1. Was the decision trail used in arriving at conclusions described in adequate detail? Can
the findings be linked with the data?
2. Were enough participant quotes included to support the findings?
3. Were the data sufficiently rich to support the conclusions? Were the findings validated by
data? Did the participants describe specific examples of the phenomenon being
investigated?

3. Analytical and interpretative preciseness:

The analytical process involves a series of interpretations and transformations during which
concrete data are transformed across several levels of abstraction. The outcome imparts meaning
to the phenomenon under study. It requires that the researcher involve others in the interpretative
process and present a meaningful picture of the phenomenon under study. Threats include:

• Failure to present the findings in a way that yields a meaningful picture of the
phenomenon under study.
• Failure to return the findings to participants or experts in the area or to readers who
determine if the results are consistent with common meanings and understandings.
• Failure to involve two or more researchers in data analysis or to describe how
disagreements about data analysis were handled.

4. Philosophical or theoretical connectedness:

Requires that the findings developed from the study be clearly expressed, logically consistent
and compatible with the knowledge base of nursing. Study assumptions, methodological
procedures and interpretative/analytic approach must be consistent with the philosophical or
theoretical basis of the study. Threats are:

• Failure to link data to nursing practice
• Failure to identify a philosophical or theoretical basis for the study
• Failure to cite references for the philosophical or theoretical approach used
• Failure to link the philosophical or theoretical basis of the study with the study
assumptions, data collection procedures and analytical and interpretative approach.

Guidelines:

1. Was a clear connection made between the data and nursing practice?
2. Did the researcher identify the philosophical or theoretical basis for the study? Were
citations provided for the philosophical or theoretical approach used?
3. Was the theoretical and philosophical basis of the study consistent with the study
assumptions, data collection process and analysis and interpretative methods used? Were
citations provided for the philosophical or theoretical approach used?

5. Heuristic relevance:

It is reflected in the reader’s ability to recognize the phenomenon described in the study, its
applicability to nursing practice and its influence on future research. The dimensions include:
a. Intuitive recognition: Readers immediately recognize the phenomenon, its connection to their
personal experience and its relationship to nursing practice. Threat includes failure to present the
findings in a way in which the reader can recognize them as being consistent with common
meanings and experiences. Guidelines are:

• Can the reader recognize the phenomenon described in the study?
• Are the findings consistent with the common meanings or experiences?

b. Relationship to the existing body of knowledge: Similarities between the current knowledge
base and the study findings add strength to the findings. The researcher needs to explore reasons
for differences. Examine the degree to which the authors compared and contrasted the study
findings with the results of other researchers’ work. Threats include:

• Failure to examine existing body of knowledge
• Failure to compare and contrast the study findings with those of other studies.
• Failure to describe the lacunae or omissions in current understandings that would account
for unique findings.

c. Applicability to nursing practice, research and education: In the discussion section examine
implications of study findings and suggestions for future research. Threats include:

• Failure to link study findings to nursing practice, research or education
• Failure to emphasize how the findings extended what was previously reported in the
literature
• Failure to identify implications of the study for related cases
• Failure to summarize suggestions for future research
Inferential statistics are used to draw inferences about a population from a sample. Consider an
experiment in which 10 subjects who performed a task after 24 hours of sleep deprivation scored
12 points lower than 10 subjects who performed after a normal night's sleep. Is the difference
real or could it be due to chance? How much larger could the real difference be than the 12
points found in the sample? These are the types of questions answered by inferential statistics.

There are two main methods used in inferential statistics: estimation and hypothesis testing. In
estimation, the sample is used to estimate a parameter and a confidence interval about the
estimate is constructed.
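As a sketch of the estimation approach, the following computes a 95% confidence interval for a mean from a small hypothetical sample; the scores and the t critical value for 9 degrees of freedom are illustrative assumptions, not data from any study discussed here.

```python
import math
from statistics import mean, stdev

# Hypothetical task scores for 10 well-rested subjects (made-up data).
scores = [68, 72, 75, 70, 74, 69, 71, 73, 76, 72]

n = len(scores)
m = mean(scores)
s = stdev(scores)              # sample standard deviation (n - 1 denominator)
se = s / math.sqrt(n)          # standard error of the mean
t_crit = 2.262                 # two-sided 95% t critical value for df = 9
ci_low, ci_high = m - t_crit * se, m + t_crit * se
print(f"mean = {m}, 95% CI = ({ci_low:.2f}, {ci_high:.2f})")
```

Roughly, if sampling were repeated many times, about 95% of intervals constructed this way would contain the true population mean.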

In the most common use of hypothesis testing, a "straw man" null hypothesis is put forward and
it is determined whether the data are strong enough to reject it. For the sleep deprivation study,
the null hypothesis would be that sleep deprivation has no effect on performance.
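To make the "straw man" idea concrete, here is a minimal two-sample t test using invented scores that mirror the sleep-deprivation example; it is a pooled-variance test (assuming equal group variances), and the numbers are placeholders rather than data from an actual study.

```python
import math
from statistics import mean, stdev

# Hypothetical scores: 10 sleep-deprived subjects vs. 10 rested subjects.
deprived = [55, 60, 58, 62, 57, 59, 61, 56, 63, 59]
rested   = [69, 73, 68, 72, 70, 71, 67, 74, 69, 77]

n1, n2 = len(deprived), len(rested)
m1, m2 = mean(deprived), mean(rested)

# Pooled variance (assumes equal variances; df = n1 + n2 - 2 = 18).
sp2 = ((n1 - 1) * stdev(deprived) ** 2 + (n2 - 1) * stdev(rested) ** 2) / (n1 + n2 - 2)
se = math.sqrt(sp2 * (1 / n1 + 1 / n2))
t = (m2 - m1) / se

# Two-sided critical value for alpha = .05 at df = 18 is about 2.101;
# |t| beyond it rejects the null hypothesis of no effect.
print(f"difference = {m2 - m1:.1f}, t = {t:.2f}, reject null: {abs(t) > 2.101}")
```

Here the 12-point sample difference is large relative to its standard error, so the null hypothesis of no effect would be rejected.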

• Descriptive statistics are for describing data on the group you study. Example: Babbie
and Halley's survey for describing your own class.
• Inferential statistics are for generalizing your findings to a broader population group.
Example: Babbie and Halley's analysis of SPSS data that can be generalized to the
population at large.
INFERENTIAL:

A researcher conducts a study to determine the effectiveness of computerized tutoring on
the scores of seventh graders on a standardized math achievement test.

- The researcher for the school district randomly selects 60 seventh graders
from a population of 175 seventh graders enrolled in an after-school tutoring
program. 30 students are randomly assigned to the experimental group and
30 students to the control group.
- The researcher would want to know if similar results would be obtained if
the entire population was used in the study.
- The researcher concluded that the difference between the sample means is
too small to conclude that the population means differ.
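The random selection and assignment steps described above can be sketched as follows; the roster names and the seed are placeholders for illustration.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical roster of the 175 seventh graders in the tutoring program.
population = [f"student_{i:03d}" for i in range(175)]

# Randomly select 60 students; random.sample returns them in random order,
# so slicing the result gives a random assignment of 30 to each group.
sample = random.sample(population, 60)
experimental, control = sample[:30], sample[30:]

print(len(experimental), len(control))  # 30 30
```

Random selection supports generalizing to the 175-student population; random assignment supports attributing group differences to the tutoring rather than to pre-existing differences.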

For example, a researcher studies 100 congestive heart failure (CHF) patients and extrapolates
the results to larger populations (such as the 1 million CHF patients in the world).

• For descriptive statistics, correlations or other relationships are tabulated for the datasets
themselves and are factual statements. For example, you might say, “in these 50 patients
with poor night vision who were enrolled in the study, we found that those who ate
carrots saw twice as well as those who did not.”
• For inferential statistics, correlations or other relationships are assessed to determine
whether they are convincing enough to draw some conclusions about the larger patient
population from which they are drawn. For example, you might say, “based on the
50 patients in the study, we can conclude that anyone who has poor night vision and eats
carrots will be able to see twice as well.”
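A small sketch of the distinction, using invented acuity scores for the 50 hypothetical patients: the descriptive statement is just arithmetic on the sample, while the inferential claim would additionally require a significance test before generalizing.

```python
from statistics import mean

# Hypothetical acuity scores (e.g., letters read on a chart) for 25 carrot
# eaters and 25 non-eaters; all values are placeholders.
carrot_eaters = [8, 9, 10, 8, 10] * 5
non_eaters    = [4, 5, 5, 4, 4.5] * 5

# Descriptive claim: a factual statement about *these* patients only.
ratio = mean(carrot_eaters) / mean(non_eaters)
print(f"In this sample, carrot eaters saw {ratio:.1f}x as well.")

# An inferential claim would additionally require a test (e.g., a t test
# with a confidence interval) before generalizing beyond these 50 patients.
```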
-----------------------------------------------------------------

B. QUANTITATIVE RESEARCH

There are four critical thinking phases, namely comprehension, comparison, analysis and
evaluation. Comparison and analysis are done simultaneously.

1. Comprehension:

• Understanding the terms and concepts in the report and identifying the elements or steps of
the research process, such as problem, purpose, framework and design; grasping the
meaning, nature and significance of these steps.

2. Comparison:

• Requires knowledge of what each step of the research process should be like; the ideal is then
compared to the real. Examine the extent to which the researcher followed the rules of an
ideal study.

3. Analysis:

• Involves critique of the logical links connecting one study element with another. The
steps of the research process need to be precisely developed and strongly linked to each
other to conduct a quality study.

4. Evaluation:

• The meaning and significance of the study are examined. The evaluation becomes a
summary of the study’s quality that builds on conclusions reached during the first three
phases.

As the critique is on a quantitative study for the assignment, the guidelines are presented in detail
below.

V. Guidelines for a quantitative study critique:

I. Title:

• Is the title a good one, succinctly suggesting key variables and the study population?

II. Abstract:

• Does the abstract clearly and concisely summarize the main features of the study (problem,
methods, results, and conclusions)?
III. Introduction

i. Problem statement:

a. What is the study problem? Is it easy to locate?

b. Is the problem stated clearly and unambiguously? Is it easy to identify?

c. Does the problem statement build a cogent and persuasive argument for the new study? Has
the research problem been substantiated with adequate experiential and scientific background
material?

d. Does the problem, as stated, express a relationship between two or more variables, or at least
between a dependent and an independent variable, implying empirical testability?

e. Does the problem specify the nature of the population being studied?

f. Does the problem have significance and relevance to nursing? Is the quantitative/ qualitative
approach appropriate?

g. Is there a match between the research problem and paradigm and methods used?

h. Is the problem sufficiently narrow in scope without being trivial?

i. Was this study feasible to conduct in terms of money commitment; the researcher’s expertise;
availability of subjects, facility, equipment; ethical considerations?

j. Has the research problem been placed within the context of an appropriate theoretical
framework?

k. Does the statement or purpose specify the nature of the population being studied?

ii. Purpose:

a. What is the study purpose?

b. Does the purpose narrow and clarify the focus or aim of the study and identify the research
variables, population and setting?

c. Is it worded appropriately? Are verbs used appropriately to suggest the nature of the inquiry
and or the research tradition?

iii. Objectives:

a. Formally stated? Clearly and concisely stated?

b. Logically linked to purpose?

c. Linked to concepts and relationships from the framework?

d. Measurable or potentially measurable and achievable?

e. Do they clearly identify the variables and population studied?

iv. Hypotheses:

Are they:

a. Properly worded?

b. Stated objectively without value laden words?

c. Stating a predictive relationship between variables?

d. Stated in such a way that they are testable?

e. Directional or nondirectional? Research or statistical? Is the direction clearly stated? Are they
causal or associative? Simple versus complex?

f. Is there a rationale for how they were stated?

g. Clearly and concisely expressed with variables and study population?

h. Logically linked to the research problem and purpose?

i. Linked to concepts, relationships from the framework and literature review?

j. Used to direct the conduct of the study?

k. Absent? If so, is their absence justified? Are statistical tests used in analyzing the data despite
the absence of stated hypotheses?

l. Derived from a theory or previous research? Is there a justifiable basis for the predictions?

m. Specific to one relationship so that each hypothesis can be either supported or not supported?

v. Conceptual framework:

a. Is the study framework identified? Is a particular theory or model identified as a framework
for the study?

b. Is the framework explicitly expressed or must it be extracted from the literature review?
c. Does the absence of a framework detract from the usefulness or significance of the research?

d. Does the framework describe and define the concepts of interest or major features of the
theory/ model so that readers can understand the conceptual basis of the study?

e. Does the framework present the relationships among the concepts?

f. Is a map or model of the framework provided for clarity? If a map or model is not provided,
develop one that presents the study’s framework and describe it.

g. If there was an intervention, was there a cogent theoretical basis or rationale for the
intervention?

h. Is the theory or model used as the basis for generating hypotheses that were tested, or is it
used as an organizational or interpretive framework? Was this appropriate?

i. Is the theory/ model appropriate for the research problem? Would a different framework have
been fitting?

j. Are deductions from the theory logical?

k. Are the concepts in the framework linked with the variables in the study?

l. Is the framework presented with clarity?

m. Are the concepts adequately defined in a way that is consistent with the theory? If there is an
intervention, are the intervention components consistent with the theory?

n. Do the problem and hypothesis naturally flow from the framework? Or is the link contrived?

o. Is the framework related to nursing’s body of knowledge? Is it based on a conceptual model of
nursing or a model developed by nurses? If it is borrowed from another discipline, is there
adequate justification for its use?

p. Is the framework linked to the research purpose?

q. Is there a link between the framework, the concepts being studied and the methods of
measurement?

r. If the proposition from a theory is to be tested, is the proposition clearly identified and linked
to the study hypotheses?

s. Was sufficient literature presented to support study of the selected concepts?

t. Did the framework guide the study methods?


u. Does the researcher tie the findings of the study back to the framework at the end of the
report? How do the findings support or undermine the framework? Are the findings interpreted
within the context of the framework?

vi. Variables

a. Do the variables reflect the concepts identified in the framework?

b. Are the variables clearly defined (conceptually and operationally) based on previous research
and or theories?

c. Is the conceptual definition of a variable consistent with the operational definition? Do the
theoretical definitions correspond to the conceptual definitions?

d. Are the variables that are manipulated or measured in the study consistent with the variables
identified in the purpose, objectives or hypotheses?

e. Are the major variables or concepts identified and defined (conceptually and operationally)?
Identify and define the appropriate variables included in the study: Independent variables,
Dependent variables, Research variables or concepts

f. What attribute or demographic variables are examined in the study?

g. Were the extraneous variables identified and controlled as necessary in the study?

h. Are there uncontrolled extraneous variables that may have influenced the findings? Is the
potential impact of these variables on the findings discussed?

IV. Review of literature

a. Is the literature review presented? Does it reflect critical thinking?

b. Are all relevant concepts and variables included in the review?

c. Are relevant previous studies (including those from other disciplines) identified and described?

d. Are relevant theories and models identified and described?

e. Are the references current? Examine the number of sources in the last five and ten years in the
reference list.

f. Is the review thorough? Does it identify/uncover the gaps or inconsistencies in literature?

g. Is the review up-to-date?

h. Is it based on primary sources? Are secondary sources cited?

i. Does it provide a state-of-the-art synthesis of evidence on the research problem?

j. Does it provide a solid basis for the new study? Does the summary of the current empirical and
theoretical knowledge provide a basis for the study?

k. Are the studies critiqued by the author?

l. Is a summary of the current knowledge provided? This summary needs to include what is
known and not known about the research problem.

m. Does the critique of each reviewed study include strengths, weaknesses and limitations of the
design; conflicts; and essential components of the design, such as the size and type of sample and
the instruments and their validity and reliability?

n. Is the review well organized? Does it flow logically and is it written concisely? Does it
demonstrate the progressive development of ideas through previous research?

o. Is the review objective?

p. Is there use of appropriate language?

q. If the review is designed to summarize evidence for clinical practice, does it draw
appropriate conclusions about practice implications?

r. Is a theoretical knowledge base developed for the problem and purpose? Does it follow the
purposes of the study?

s. Does the literature review provide a rationale and direction for the study?

t. Are both conceptual and data based literature included?

u. Is there a written summary synthesis of the reviewed scholarly literature?

v. Does the summary follow a logical sequence that leads the reader to the reasons why the
particular research or non-research project is needed?

V. Methodology:

i. Ethical considerations:

a. Are the rights of human subjects protected?

b. Were appropriate procedures used to safeguard the rights of study participants?

c. Was the study subjected to an external review? Was the study approved and monitored by an
institutional review board, research ethics board or other similar ethics review committee?
d. Was the study designed to minimize risks and maximize benefits to participants? Did the
benefits outweigh any potential risks or actual discomfort they experienced?

e. Did the benefits to society outweigh the costs to participants?

f. Was any undue coercion or undue influence used to recruit participants? Did they have the
right to refuse to participate or to withdraw without penalty?

g. Were the study participants subjected to any physical harm, discomfort or psychological
distress? Did the researchers take appropriate steps to remove or prevent harm?

h. Were participants deceived in any way? Were they fully aware of participating in a study and
did they understand the purpose and nature of the research?

i. Were the subjects informed about the purpose and nature of the study?

j. Were appropriate informed consent procedures used with all subjects? Was the information
essential for consent provided? If not, were there valid and justifiable reasons? Were the
subjects capable of comprehending the information and competent to give consent? Did it seem
that the subjects participated voluntarily?

k. Were adequate steps taken to safeguard the privacy of the participants? How were data kept
anonymous or confidential? Was a certificate of confidentiality obtained?

l. Were vulnerable groups involved in the research? If yes, were special precautions instituted
because of their vulnerable status?

m. Were groups omitted from the enquiry without a justifiable rationale?

n. Discuss the institutional review board approval obtained from the university/agency where the
study was conducted.

ii. Design:

a. Is the research design clearly addressed? Identify the specific design of the study. Is the design
employed appropriate?

b. Does the research question imply a question about the causal relationship between the
independent and dependent variables?

c. What would be the strongest design for the research question? How does this compare to the
design actually used? Was the most rigorous possible design used, given the purpose of the
research?

d. Does the researcher use the various concepts of control that are consistent with the type of
design chosen?

e. Does the design seem to reflect the issues of economy?

f. What elements are controlled? What elements could have been controlled to improve the
design?

g. What was the feasibility of controlling particular elements of the study? What was the effect
of not controlling these elements on the validity of the study findings?

h. Were appropriate comparisons made to enhance interpretability of findings?

i. What elements of the design were manipulated and how were they manipulated? How adequate
was the manipulation? What elements should have been manipulated to improve the validity of
the findings?

j. Does the design used seem to flow from the proposed research problem, theoretical
framework, literature review and the hypotheses?

k. What are the threats to internal and external validity?

l. What are the controls for the threats to internal and external validity?

m. Does the study include a treatment or intervention? If so, is the treatment clearly defined
conceptually and operationally? Is it clearly described and consistently implemented? Was the
control or comparison condition adequately explained? What justification from the literature was
provided for the development of the experimental intervention? Was the intervention the best that
could be provided given current knowledge?

n. Does the study report who implemented the treatment? If more than one person did so, were
they trained to ensure consistency in the delivery of the treatment? Was any control or
comparison group intervention described?

o. Was there a protocol developed to ensure consistent or reliable implementation of the
treatment with each subject throughout the study? Was an intervention theory provided to
explain why the intervention causes the outcomes and exactly how the intervention produced the
desired effects?

p. If an experimental (or quasi-experimental) study, what specific design was used? Were
randomization procedures adequately explained? Is there adequate justification for any failure to
randomize subjects to treatment conditions? What evidence does the report provide that any
groups being compared were equivalent before the intervention?
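When appraising the randomization procedures asked about above, it helps to recall what simple random assignment actually involves. The sketch below is a minimal, hypothetical illustration in Python; the function name and group labels are illustrative, not taken from any particular study:

```python
import random

def randomize(subjects, groups=("treatment", "control"), seed=None):
    """Simple randomization: shuffle the subject list, then deal
    subjects alternately into the groups."""
    rng = random.Random(seed)  # seed is only for reproducible demos
    shuffled = list(subjects)
    rng.shuffle(shuffled)
    # every i-th subject after shuffling goes to the i-th group
    return {g: shuffled[i::len(groups)] for i, g in enumerate(groups)}

assignment = randomize(range(10), seed=42)
# each subject lands in exactly one group; group sizes differ by at most one
```

A report that describes nothing more than this (who generated the sequence, whether it was concealed) still leaves the equivalence-of-groups question open, which is why the checklist asks for baseline comparisons as well.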

q. If the study was nonexperimental, was it inherently nonexperimental? What design was used?
If retrospective, was there adequate justification for the failure to use a prospective design?
What evidence does the report provide that any groups being compared were similar with regard
to important extraneous characteristics?

r. If the study has more than one group, how were the subjects assigned to groups?

s. What type of comparisons are specified in the design (before-after, between groups)? Do these
comparisons adequately illuminate the relationship between the independent and dependent
variables? If there are no comparisons, or flawed comparisons, how does this affect the integrity
of the study and the interpretability of the results?

t. Was the study longitudinal? Was the timing of the collection of data appropriate? Was the
number of data collection points reasonable?

u. Were masking and blinding used at all? If yes, who was blinded, and was this adequate? If
not, was there an adequate rationale for the failure to mask? Is the intervention one that could
raise expectations that in and of themselves could alter the outcomes? Did the design minimize
biases and threats to the internal and external validity of the study?

v. Are the extraneous variables identified and controlled?

w. Were pilot study findings used to design the major study? Briefly discuss the pilot study and
the findings. Indicate the changes made in the major study based on the pilot.

x. Is the design logically linked to the sampling method and statistical analyses?

y. Does the design provide a means to examine all of the objectives, questions or hypotheses and
the study purpose?

iii. Setting:

Discuss the setting and whether it was appropriate for the conduct of the study.

iv. Population and Sample:

a. Was the population identified and described? Was the sample described in sufficient detail? Is
the target population to which the findings will be generalized defined?

b. Was the best possible sampling design used to enhance the sample's representativeness?
Were sample biases minimized? What was the possibility of a type II error?

c. Is the sampling method adequate to produce a sample that is representative of the study
population? Is the sample representative of accessible and target population?

d. Was the sample size adequate? Identify the sample size. Indicate if a power analysis was
conducted to determine the sample size.
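When checking whether a power analysis could justify a reported sample size, the standard normal-approximation formula for a two-group comparison of means offers a quick sanity check. A sketch in Python, using only the standard library; note this approximation runs slightly below the exact t-based figure that dedicated software reports:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means (normal approximation):
    n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2, rounded up."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for power=.80
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# a medium effect (Cohen's d = 0.5), alpha .05, power .80 -> 63 per group
```

If a report claims adequate power with far fewer subjects per group than such a calculation suggests for the effect size it anticipated, that discrepancy is worth raising in the critique.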

e. Identify the inclusion and exclusion sample criteria. Are the sample selection procedures
clearly delineated?

f. Indicate the method used to obtain the sample. Did the researchers identify the sampling frame
for the study?

g. Do the sample and population specifications support an inference of construct validity with
regard to population construct?

h. What type of sampling plan was used? What alternative sampling plan would have been
preferable? Was it one that could be expected to yield a representative sample?

i. How were subjects recruited into the sample? Does the method suggest potential biases?

j. Did some factor other than the sampling plan affect the representativeness of the sample?

k. Are possible sample biases or weaknesses identified? What are the potential biases in the
sampling method?

l. Is the sample sufficiently large to support statistical conclusion validity? Was the sample size
justified on the basis of a power analysis or other rationale?

m. Does the sample support inferences about external validity? To whom can the study results
reasonably be generalized?

n. Are key characteristics of the sample described (female or male percentage, mean age, etc.)?

o. What number and percentage of the potential subjects refused to participate? Identify the
sample mortality or attrition from the study. If so, are justifications given?

p. If more than one group is used do the groups appear equivalent?

q. Have sample delimitations been established?

r. Would it be possible to replicate the study population? Does the researcher indicate how
replication of the study with other samples would provide increased support for the findings?

v. Instrument/tools:

a. Are all of the measurement strategies/instruments identified and described? Identify the
author of each measurement strategy. Identify the type of each measurement strategy (Likert
scale, visual analogue scale, physiological measurement, questionnaire, interview, observation).
Is a rationale for their selection given?

b. Is the method used appropriate to the problem being studied? Were the methods used
appropriate to the clinical situation? Are they similar for all subjects?

c. Identify the level of measurement (nominal, ordinal, interval or ratio) achieved with each
instrument. Discuss how each study instrument was developed.

d. Report the reliability and validity of each instrument or scale from previous studies and the
current study. Discuss the precision and accuracy of the physiological measurement methods
used in a study.

e. Was the set of data collection instruments adequately pretested?

f. Do the instruments adequately measure the study variables? Were key variables
operationalized using the best possible method (e.g., interviews, observations, and so on) and
with adequate justification? Determine whether the type of measurement is direct or indirect.

g. Are the specific instruments adequately described in terms of the reading level of questions,
length of time to complete, number of modules included, and so on? Were they good choices,
given the study purpose and study population? Was the mode of obtaining data appropriate
(in-person interview, mailed questionnaire, internet questionnaire)?

h. Were self-report data gathered in a manner that promoted high-quality and unbiased responses
(e.g., privacy, efforts to put respondents at ease)?

i. If observational data were used, did the report adequately describe what specific constructs
were observed? What was the unit of observation, and was the approach molar or molecular?

j. Does the report provide evidence that data collection methods yielded data that were high on
reliability and validity?

k. Are the instruments sufficiently sensitive to detect differences between subjects?

l. Are the validity and reliability of the instruments adequate for use in the study? Does the
report offer evidence of the validity and reliability of measures? Does the evidence come from
the research sample itself, or is it based on other studies? If the latter, is it reasonable to
conclude that data quality would be similar for the research sample as for the reliability sample?

m. If validity and reliability are reported, which methods of validity and reliability appraisal
were used? Were they appropriate? Is the reliability sufficiently high, and does the validity
appear adequate? Should another method have been used?

n. Do the instruments need further research to evaluate validity and reliability? If no information
on validity and reliability is given, what conclusions can be reached about the quality of the data?
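Internal-consistency reliability is most often reported as Cronbach's alpha. A minimal Python sketch of the computation, using illustrative data only, may help when judging whether a reported coefficient is plausible:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.
    items: one list of scores per scale item, with respondents in the
    same order in every list."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]      # per-respondent total
    sum_item_var = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - sum_item_var / pvariance(totals))

# three perfectly consistent items yield alpha = 1.0;
# values of about .70 or higher are conventionally taken as acceptable
```

A critique can then ask whether alpha was computed on the present sample or merely carried over from an earlier study with a different population.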

Scales and Questionnaires:

• Are the instruments clearly described? Are they described well enough to know whether they
cover the subject?
• Are techniques to administer, complete and score the instruments provided?
• Are the validity and reliability of the instruments described?
• Did the researcher examine the reliability and the validity of the instruments for the
present sample?
• If the instrument was developed for the study, is the instrument development process
described?
• Are the majority of the items appropriately closed- or open-ended?
• Is there a clear indication that the subjects understood the questionnaire?
• Is there evidence that subjects were able to perform the task?

Observation:

• Is what is to be observed clearly identified and defined?
• Are interrater and intrarater reliability described?
• Are the techniques for recording observations described?
• Was there an observational guide?
• Is there any reason to believe that the presence of the observers affected the behavior of
the subjects?
• Were observations performed using the principles of informed consent?
• Were the observers required to make inferences about what they saw?
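Interrater reliability for categorical observation codes is commonly summarized with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. A small Python sketch (the rater data in the tests are illustrative):

```python
from collections import Counter

def cohen_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning categorical codes to the
    same set of observations."""
    n = len(rater1)
    # observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # chance agreement from each rater's marginal category frequencies
    c1, c2 = Counter(rater1), Counter(rater2)
    expected = sum(c1[cat] * c2[cat] for cat in c1) / n ** 2
    return (observed - expected) / (1 - expected)
```

Perfect agreement gives kappa = 1.0, while agreement no better than chance gives 0.0, which is why kappa is more informative in a critique than a bare percent-agreement figure.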

Interviews:

• Is the interview schedule described adequately enough to know whether it covers the
subject?
• Do the interview questions address concerns expressed in the research problem?
• Are the interview questions relevant for research purpose and objectives, questions or
hypotheses?
• Does the design of the questions tend to bias subjects' responses?
• Does the sequence of questions tend to bias subjects' responses?
• Is there clear indication that the subjects understood the task and questions?

Physiological measure:

• Are the measures or instruments clearly described? If appropriate are the brand names
identified?
• Is the instrument used appropriate to the research problem, or forced to fit it?
• Is a rationale given for why a particular instrument was selected?
• Is there a provision for evaluating the accuracy of the instrument and those who use it?
• Are the accuracy, precision, selectivity, sensitivity, error of the physiological instruments
discussed?
• Are the methods for recording data from the physiological measures clearly described?

Available data and records:

• Are the records used appropriate to the problem being studied?


• Are the data examined in such a way as to provide new information and not merely summarize
the records?
• Has the author addressed questions of internal and external criticism?
• Is there any indication of selection bias in the available records?

Focus groups:
• What was the aim of the focus group?
• Was the group size appropriate for the focus group method?
• Was the group sufficiently homogeneous for its members to speak candidly?
• Was the moderator successful in keeping the discussion focused?
• Was the aim of the focus group achieved?
• Did the conclusions appear to be a valid representation of the discussion?
• Were minority positions identified and explored?

Rating scale/ semantic differential scales/visual analogue scales:

• Is the instrument clearly described?
• Are the techniques that were used to administer and score the scale provided?
• Is information about the validity and reliability of the scale described from previous studies
or the present sample? Was the instrument development process described, if the scale was
developed for the study?

vi. Data collection:

a. Did the researcher make the right decision about collecting new data versus existing data for
the study?

b. Did the researcher make good data collection decisions with regard to structure, quantification,
researcher obtrusiveness and objectivity?

c. Were the right methods used to collect the data? Was triangulation of methods used
appropriately; that is, were multiple methods used sensibly? Were the data collection procedures
the same for all subjects?

d. Was the right amount of data collected? Were data collected to address the varied needs of the
study? Were too many data collected, in terms of burdening study participants? If so, how might
this have affected data quality?

e. Did the researcher use good instruments, in terms of congruence with underlying constructs,
data quality, reputation, efficiency and so on? Were new instruments developed unnecessarily?

f. Did the report provide adequate information about data collectors and data collection
procedure? Is the data collection process clearly described?

g. Is the data collection process conducted in a consistent manner? Are the data collection
methods ethical? Do the data collected address the research objectives, questions or hypotheses?

h. Who collected the data? Were data collectors judiciously chosen? Did they have traits that
undermined the collection of unbiased, high quality data or did their traits enhance data quality?

i. Was the training of data collectors described? Was the training adequate? Were steps taken to
improve the data collectors' ability to elicit or produce high-quality data and to monitor their
performance?

j. Where and under what circumstances were data gathered? Was the setting for data collection
appropriate?

k. Were other people present during data collection? Could the presence of others have resulted
in any biases?

l. Were data collected in a manner that minimized bias? Did the intervention group actually
receive the intervention?

m. Was a category system or rating system used to organize and record observations? Were
decisions about exhaustiveness and degree of observer inference appropriate?

n. What methods were used to sample observational units? Was the sampling approach a good
one? Was it likely to yield a representative sample of behavior? To what degree were observer
biases controlled or minimized?

o. Were biophysiologic measures used in the study and was this appropriate? Were appropriate
methods used to measure the variables of interest? Did the researcher appear to have the skills
necessary for proper interpretation of bio-physiologic measures?

VI. Data analysis:

a. Are data analysis procedures clearly described? What statistical analyses are included in the
research report? Identify the analysis techniques used to describe the sample.

b. Do data analyses address each objective, question or hypothesis?

c. Are data analyses procedures appropriate to the type of data collected?

d. Are the results presented in an understandable way?

e. Are tables and figures used to synthesize and emphasize certain findings? Do the tables,
graphs, and figures agree with the text and extend it, or do they merely repeat it? Were the
tables, graphs, and pictures clear, with good titles and carefully labeled headings?

f. Were appropriate descriptive statistics used? What descriptive statistics were reported? Do
these statistics describe the major characteristics of the data set?

g. What level of measurement is used to measure each of the major variables? Were these
descriptive statistics appropriate to the level of measurement of each variable?
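The level of measurement constrains which descriptive statistics are meaningful: a mode for nominal variables, a median for ordinal variables, and a mean for interval or ratio variables. A short Python sketch of that mapping (the helper name and example values are illustrative):

```python
from statistics import mean, median, mode

def central_tendency(values, level):
    """Pick a central-tendency measure appropriate to a variable's
    level of measurement."""
    if level == "nominal":
        return mode(values)    # e.g. most common diagnosis category
    if level == "ordinal":
        return median(values)  # e.g. pain rated on an ordered scale
    return mean(values)        # interval or ratio, e.g. age in years
```

A report that averages nominal codes, or that reports only a mean for a heavily skewed ordinal scale, is a descriptive-statistics choice the critique should flag.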

h. Were any risk indexes computed? If not, should they have been?

i. Are there appropriate summary statistics for each major variable?

j. Was the most powerful analytic method used? Were type I and type II errors avoided or
minimized?

k. Does the level of measurement and sample size permit the use of parametric statistics?

l. Are the statistics used appropriate to the problem, the hypothesis, the method, the sample and
the level of measurement?

m. If nonparametric tests were used, was a rationale provided, and does the rationale seem
sound? Should more powerful parametric procedures have been used instead?
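Nonparametric tests such as the Mann-Whitney U replace raw scores with ranks, which is why they trade away some power when parametric assumptions actually hold. As a sketch of what such a test computes, here is the U statistic itself in Python (p-values would come from tables or statistical software, not this fragment):

```python
def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for group x versus group y, using
    average ranks for tied values."""
    pooled = sorted(x + y)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1                          # j is one past the tie run
        ranks[pooled[i]] = (i + j + 1) / 2  # average of ranks i+1 .. j
        i = j
    rank_sum_x = sum(ranks[v] for v in x)
    return rank_sum_x - len(x) * (len(x) + 1) / 2
```

U ranges from 0 (every x below every y) to len(x) * len(y) (every x above every y), so extreme values in either direction signal a group difference.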

n. Are the results for each of the hypotheses presented appropriately? Are the tests that were used
to analyze the data presented?

o. Is the information regarding the results presented concisely and sequentially? Are the results
interpreted in light of the hypotheses, the theoretical framework and all the steps that preceded
the results? Do the findings support the study framework?

p. Are the results clearly and completely stated? Presented objectively? Is there enough
information to judge the results?

q. Was the level of significance or alpha identified? If so, indicate the level. Identify the focus
(description, relationship, differences) of each analysis technique, with the statistical procedure,
test statistic, specific results and specific probability value, in table form.

r. Are significant and nonsignificant findings explained? If the results were nonsignificant, was
the sample size sufficient to detect significant differences? Was a power analysis conducted to
examine nonsignificant findings?

s. Are the analyses interpreted appropriately? Does the interpretation of findings appear biased?
Are the biases in the study identified?

t. Are there uncontrolled extraneous variables that may have influenced the findings? Do the
conclusions fit the results from the analyses? Are the conclusions based on statistically and
clinically significant results?

u. Were the statistically significant findings also examined for clinical significance? Is a
distinction made between practical significance and statistical significance? How?

v. What conclusions did the researcher identify based on this study and previous research? Are
any generalizations made? How did the researcher generalize the findings? Are the
generalizations within the scope of the findings or beyond the findings?
w. Are findings reported in a manner that facilitates a meta-analysis, with sufficient information
for evidence-based practice? Are the findings adequately summarized?

VII. Discussion:

a. What is the researcher’s interpretation of findings? Are all important results discussed? If not
what is the likely explanation for omissions?

b. Did the researcher identify and discuss important study limitations and their effects on the
results?

c. Are there inconsistencies in the report? Are the findings consistent with the results and with
the study's limitations? Do the interpretations suggest distinct biases?

d. Are all major findings interpreted and discussed within the context of prior research and/or the
study's conceptual framework? Are the findings consistent with previous research findings?

e. Does the report address the issue of the generalizability of the findings? Are generalizations
made that are not warranted on the basis of the sample used? Which findings are unexpected?

f. Are alternative explanations for the findings mentioned and is the rationale for their rejection
presented?

g. Does the interpretation distinguish between practical and statistical significance? Are any
unwarranted interpretations of causality made?

h. Do the researchers discuss the study’s implications for clinical practice, nursing education,
nursing administration, nursing theory or make specific recommendations? What implications do
the findings have for nursing practice? Are they reasonable and complete?

i. Are the implications appropriate given the study's limitations and the body of evidence from
other studies? Are there important implications that the report neglected to include?

j. What suggestions/recommendations are made for further studies?

k. What are the missing elements of the study? Is the description of the study sufficiently clear to
allow replication?

VIII. Application and utilization:

a. How much confidence can be placed in the study findings? Are the findings an accurate
reflection of reality? Does the study appear valid?

b. Are the findings related to the framework? Are the findings linked to those of previous
studies? Are there other studies with similar findings? What do the findings add to the current
body of knowledge? To what populations can the findings be generalized?

c. What research questions emerge from the findings? Are these questions identified by the
researcher?

d. What is the overall quality of the study when strengths and weaknesses are summarized?
Could any of the weaknesses have been corrected? Do the strengths outweigh the weaknesses?

e. Do the findings have potential for use in nursing practice? What risks and benefits are
involved for patients if the research findings were used in practice?

f. Can the study be replicated by other researchers? Did the researcher use sound methodology?
Do the findings accurately reflect reality? Are the findings credible?

g. Is direct application of the research findings feasible in terms of time, effort, money, and legal
and ethical risk? How and under what circumstances are the findings applicable to nursing
practice?

h. Does the study contribute any meaningful evidence that can be used in nursing practice or that
is useful to the nursing discipline?

IX. Researcher credibility and presentation:

a. Do the researchers' clinical, substantive or methodological qualifications and experience
enhance confidence in the findings and their interpretation?

b. Is the report well written, well organized and sufficiently detailed for critical analysis? Is the
material presented in a logical sequence and in useful locations?

c. Was the report written in a manner that makes the findings accessible to practicing nurses?

X. Conclusion:

The exercise of critiquing was a useful task for applying knowledge of research. Identifying the
strengths and weaknesses of the study, including its constraints and limitations, helped in
reviewing the research process. The exercise allows room for thoughtfulness and for holding the
analysis in practical terms. Thus the research critique makes it possible to verify the authenticity
of the information, to analyze the credibility of the findings, and to weigh the evidence base in
terms of practicality, objectivity, utilization, application and the possibility of replication.
95. Which of the following terms is used to refer to protrusion of abdominal organs through the
surgical incision?

a) Evisceration
Evisceration is a surgical emergency.

b) Hernia
A hernia is a weakness in the abdominal wall.

c) Dehiscence
Dehiscence refers to partial or complete separation of wound edges.

d) Erythema
Erythema refers to redness of tissue.

When the method of wound healing is one in which wound edges are not surgically approximated
and integumentary continuity is restored by granulations, the wound healing is termed

a) second intention healing.
When wounds dehisce, they will be allowed to heal by secondary intention.

b) primary intention healing.
Primary or first intention healing is the method of healing in which wound edges are surgically
approximated and integumentary continuity is restored without granulating.

c) first intention healing.
Primary or first intention healing is the method of healing in which wound edges are surgically
approximated and integumentary continuity is restored without granulating.

d) third intention healing.
Third intention healing is a method of healing in which surgical approximation of wound edges
is delayed and integumentary continuity is restored by bringing apposing granulations together.
