
Analysis

CMAJ

Research methodology series

Unexpected predictor–outcome associations in clinical prediction research: causes and solutions
Ewoud Schuit MSc, Rolf H.H. Groenwold MD PhD, Frank E. Harrell Jr. MSc PhD, Wim L.A.M. de Kort MD PhD,
Anneke Kwee MD PhD, Ben Willem J. Mol MD PhD, Richard D. Riley MSc PhD, Karel G.M. Moons MSc PhD

Clinical prediction models aim to predict individual clinical outcomes using multiple predictor variables. Prediction models are abundant in the medical literature and their number is increasing.1–3 Established causal risk factors are often good predictors. For example, the Framingham Risk Score, which predicts the 10-year risk of cardiovascular disease, includes the variables blood pressure and smoking status, which are well-established risk factors.4 However, predictors are not necessarily causally related to the clinical outcome, for example, tumour markers in cancer progression or recurrence.

Even though causality is not certain, clinical researchers often anticipate a particular direction in the relation between predictor and outcome. For example, higher blood pressure is expected to increase and not decrease the risk of cardiovascular disease.4 Thus, a negative association between blood pressure and cardiovascular disease (suggesting a protective effect) is unexpected and unlikely to be found in another population. It could, consequently, hamper generalizability. Moreover, unexpected findings may suggest that the prediction model is invalid, thus lowering the face validity of the model, such that readers and potential users will not trust the model to guide their practice.5

Our aim is to describe causes for unexpected findings in prediction research and to provide possible solutions. In the first section, we describe 3 clinical examples that will be used for illustrative purposes throughout the paper. The subsequent section outlines causes for unexpected findings and their potential solutions, followed by a general discussion.

Clinical examples
We use data from 3 prognostic studies in which an unexpected predictor–outcome relation was found during the development of the prognostic model. Using these studies, we illustrate causes and solutions for unexpected findings in clinical prediction research. For illustration purposes, we use selective samples of the original data; the validity of the original models is not questioned in any way.
Example 1. Metabolic acidosis in neonates
Metabolic acidosis in neonates is associated with several short- and long-term complications, including death. Westerhuis and colleagues6 developed a prediction model to identify as early as possible women at risk of giving birth to a child with metabolic acidosis, which can be the result of a lack of oxygen in the fetus. Because an elevated maternal body temperature leads to more oxygen consumption, it was unexpected that a higher maternal body temperature actually reduced the risk of neonatal metabolic acidosis.6

Competing interests: Karel G.M. Moons' institution has received funds from GlaxoSmithKline, Bayer and Boehringer Ingelheim. No other competing interests declared.

This article has been peer reviewed.

Correspondence to: Ewoud Schuit, e.schuit@umcutrecht.nl

CMAJ 2013. DOI:10.1503/cmaj.120812

Example 2. Diagnosing deep vein thrombosis
Deep vein thrombosis is a serious condition with potentially lethal complications, such as pulmonary embolism. The common practice is to diagnose deep vein thrombosis with ultrasonography, which requires referral to a radiology department. Oudega and colleagues7 developed a prediction model to diagnose deep vein thrombosis. In general, deep vein thrombosis is less common among men than women.8,9 Therefore, the observation that male sex increased the probability of a diagnosis of deep vein thrombosis was unexpected.7

Key points

• Unexpected predictor–outcome associations may suggest that the prediction model is invalid and has poor generalizability.
• Possible causes for unexpected findings in prediction research include chance, misclassification, selection bias, mixing of effects (confounding), intervention effects and heterogeneity.
• The type of design or analytical method used to address an unexpected finding depends on the cause of the unexpected finding.
• Researchers should report unexpected findings in prediction models, including the potential causes of the unexpected findings and the attempts undertaken to solve them, to improve the potential uptake and impact of the model.


Example 3. Anemia in whole-blood donors
To prevent iron deficiency in donors after blood donation and to guarantee high quality of donor blood, the iron status of blood donors is assessed before donation by measuring hemoglobin levels. Deferrals are demoralizing for donors and increase the risk of subsequent donor lapse. Baart and colleagues10 developed 2 sex-specific models to predict low hemoglobin levels in whole-blood donors. The total number of whole-blood donations in the past 2 years was expected to increase the chance of low hemoglobin levels but, unexpectedly, lowered the probability of low hemoglobin levels.

Causes and solutions


Causes for unexpected findings in prediction research include chance, misclassification of the predictor, selection bias, mixing of effects (confounding), intervention effects and heterogeneity (Table 1). One rigorous solution that appeals to all causes for unexpected findings is to delete the predictor with the unexpected finding from the model. However, this is undesirable as it likely reduces the predictive ability of the clinical prediction model11 and reduces the face validity of the model. Hereafter, we will discuss the causes and solutions depicted in Table 1 and illustrate these using the aforementioned clinical examples.

Table 1: Causes of and solutions for unexpected findings in prediction research

Chance
Description: Owing to chance, the direction of the predictor–outcome relation can be unexpected, especially when samples are small.
Solutions: Advanced predictor-selection strategies; proper design of prediction-model studies with suitable sample size calculation; delete predictor from the model.

Misclassification
Description: Unexpected findings owing to misclassification can occur when a predictor is measured or coded with error, the predictor–outcome relation is modelled incorrectly or 2 or more variables are included even though they are collinear.
Solutions: Redo the measurement/reclassify; delete erroneous value (if known) and impute; model predictor on its continuous scale and consider nonlinear trends; include strongest predictor of 2 collinear variables; combine 2 collinear variables into 1 predictor; delete predictor from the model.

Selection
Description: An unexpected finding occurs when selection is related to both the predictor and the outcome, either at inclusion, during follow-up or during the outcome assessment (Figure 1).
Solutions: Apply weighting to undo the selection process; add participants to undo the selection process; clearly define the domain in which the model is applicable; delete predictor from the model.

Mixing of effects (confounding)
Description: When 2 causes of the outcome are mutually related, the observed effect of one can be mixed up with the effect of the other, potentially resulting in an unexpected finding (Figure 1).
Solutions: Add extra variable to the model; delete predictor from the model.

Intervention effects
Description: A predictor value can trigger a medical intervention, which subsequently lowers the probability of the outcome, thereby attenuating the observed relation between the predictor and the outcome (Figure 1).
Solutions: Add intervention to the model; delete predictor from the model.

Heterogeneity
Description: Predictor effects may differ across subgroups (i.e., interaction or heterogeneity of predictor effects). If the distribution of the subgrouping factor in the study population differs from the distribution of this factor in the typical patient population, this may lead to an unexpected finding.
Solutions: Add interaction term to the model; delete predictor from the model.

Chance
The direction of the found (estimated) predictor–outcome relation may be opposite from the anticipated direction merely by chance. For example, the relation between gestational age and neonatal metabolic acidosis (example 1) was observed to be an odds ratio (OR) of 1.17 (95% confidence interval [CI] 1.09–1.27), which is in line with clinical experience (i.e., expected) because the chance of metabolic acidosis increases with increasing gestational age. However, if we take random samples from this data set, by chance we may observe an opposite relation. For example, we took 1000 random samples of size 50, 100, 250, 500 or 1000 participants from this same data source. Among these samples, the proportion of unexpected findings (OR < 1) decreased with increasing sample size: 37.3%, 33.5%, 25.4%, 18.3% and 11.0%, respectively. Hence, the probability of observing an unexpected finding of the predictor–outcome relation by chance strongly depends on sample size.
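To make this dependence concrete, the following minimal simulation sketch (not from the original article; it assumes synthetic data with a weak positive predictor effect rather than the original acidosis data) repeatedly fits a univariable logistic regression and counts how often the estimated OR falls below 1 purely by chance:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

def flipped_fraction(n, n_sim=1000, true_log_or=0.15):
    # Fraction of simulated samples in which the estimated OR is below 1,
    # even though the assumed true log-OR is positive.
    flipped = fitted = 0
    for _ in range(n_sim):
        x = rng.normal(size=n)                            # standardized predictor
        p = 1 / (1 + np.exp(-(-2.0 + true_log_or * x)))   # assumed outcome risk
        y = rng.binomial(1, p)
        if y.sum() in (0, n):                             # skip degenerate samples
            continue
        try:
            fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
        except Exception:                                 # e.g., perfect separation
            continue
        fitted += 1
        if fit.params[1] < 0:                             # log(OR) < 0, i.e., OR < 1
            flipped += 1
    return flipped / fitted

for n in (50, 100, 250, 500, 1000):
    print(n, round(flipped_fraction(n), 3))

As in the example above, the flip rate shrinks as the sample size grows.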
Misclassification
The status of a predictor may be measured or coded with error (e.g., coding women with intrapartum fever as women without and vice versa) and hence may lead to incorrect classification (i.e., predictor misclassification). Furthermore, the predictor–outcome relation may be modelled incorrectly, for example, when an incorrect transformation is used for a continuous predictor (e.g., linear instead of nonlinear) or when a continuous variable is categorized.3,12 Finally, collinearity of variables may result in apparent misspecification. Collinearity arises when 2 or more predictors are highly correlated and so explain similar components of the variability in patient outcome. For example, body mass index and weight are by definition strongly correlated. Including these predictors together can lead to poor estimation of the individual predictor estimates, in particular an inflated standard error and low power, which may increase the possibility of unexpected findings. However, in terms of predictive accuracy of the overall model, collinearity will usually not affect performance as long as the collinearities in future data are similar to those identified in the data used to develop the model.
Possible solutions for misclassification of a predictor include redoing the measurement (if possible) and modelling the continuous predictor appropriately (e.g., by splines or fractional polynomials).13 Alternatively, if it is known which values are measured with significant error, these could be deleted and imputed.14 In the case of collinearity, options include omitting some of the affected predictor variables from the model, or combining them into a single variable (e.g., mean arterial pressure instead of systolic and diastolic blood pressure) by adding or summing them. However, it may be entirely sensible to include collinear predictors together in the model to improve the overall predictive accuracy. In this situation, an important recommendation is to interpret any collinear predictors in combination, rather than separately. For example, in a model in which the highly collinear variables age and age squared are both included, one should discuss the quadratic relation due to age and not focus on the individual estimates for age or age squared. A more extreme solution is to adopt a different regression technique, such as ridge regression, but this itself may lead to biased predictor effect estimates and make the model hard to interpret.
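As an illustration of modelling a continuous predictor flexibly rather than dichotomizing it, the sketch below (not from the original article) assumes synthetic temperature data and uses the bs() B-spline helper available to statsmodels formulas; the 38.5°C threshold and all effect sizes are illustrative, not the original model:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
temp = rng.normal(37.2, 0.6, 5000)
# assumed nonlinear truth: risk is roughly flat below ~38 °C and rises above it
logit_p = -3 + 1.5 * np.clip(temp - 38.0, 0, None)
acidosis = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))
df = pd.DataFrame({"temp": temp, "acidosis": acidosis})

# continuous predictor modelled with a B-spline basis (nonlinear trend allowed)
spline_fit = smf.logit("acidosis ~ bs(temp, df=4)", data=df).fit(disp=0)
# the same predictor dichotomized at a fever threshold, discarding information
binary_fit = smf.logit("acidosis ~ I(temp >= 38.5)", data=df).fit(disp=0)
print(spline_fit.llf, binary_fit.llf)   # log-likelihoods; the spline typically fits better here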
Misclassification was a potential cause of the unexpected finding in the example of predicting risk of metabolic acidosis in neonates (example 1). After reclassification of intrapartum fever from 37.8°C to 38.5°C, the initial unexpected finding in the predictor–outcome relation (OR 0.86 [95% CI 0.68–1.08]) disappeared (OR 1.43 [95% CI 0.99–2.08]).6 The choice of temperature (threshold value) to define fever is thus influential in the direction of the predictor effect, and a better approach may be to analyze temperature as a continuous predictor.
Selection bias
If the study population is a selective sample from the total patient population (domain), this may result in biased estimates, for example, when the selection is related to both the predictor and the outcome. Selection bias can occur at different phases during a study, for example, at inclusion (e.g., index event bias), during follow-up (e.g., selective dropout) or during the measurement of the outcome of interest (e.g., when not all patients undergo the same reference test, referred to as differential verification).15
The mechanism resulting in selection bias is schematically shown in Figure 1. If both the predictor and the outcome (possibly through a symptom of the disease of interest) affect the probability of selecting a participant for a study, this may induce a bias (i.e., selection bias). A possible solution for selection bias is to apply weighting, in which a subgroup can be given extra weight to compensate for possible underrepresentation.16 The extent of the underrepresentation, however, is typically unknown, and the weights will therefore depend on unverifiable assumptions.16 Another solution is to clearly define the domain in which the model is applicable (e.g., only among patients with suspected deep vein thrombosis in secondary care).
In the diagnostic study on deep vein thrombosis (example 2), men unexpectedly had a higher probability of a diagnosis of deep vein thrombosis than women (OR 1.84 [95% CI 1.41–2.40]). This may be owing to an overrepresentation of women without deep vein thrombosis in the study population. If female sex is a risk factor for deep vein thrombosis, primary care physicians may suspect deep vein thrombosis more often in women than in men, and consequently more women without deep vein thrombosis might be referred to secondary care. If we assume that women were twice as likely to be included in the study compared with men, the unexpected finding disappears by weighting these overrepresented women without deep vein thrombosis with 1/2 (i.e., 1 divided by the likelihood of inclusion in the study) in the multivariable regression model (OR 0.92 [95% CI 0.62–1.36]).
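A minimal sketch of such weighting (not from the original article; it assumes a synthetic cohort in which the selection probabilities are known, which they rarely are in practice) fits a logistic model with inverse-probability-of-selection weights. Note that the standard errors would need a design-based correction that is omitted here:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 5000
female = rng.binomial(1, 0.5, n)
dvt = rng.binomial(1, 0.15, n)                        # assumed, unrelated to sex
# assumed selection: women without DVT are twice as likely to enter the study
p_select = np.where((female == 1) & (dvt == 0), 0.8, 0.4)
selected = rng.binomial(1, p_select).astype(bool)

male = 1 - female[selected]
X = sm.add_constant(male)
w = 1.0 / p_select[selected]                          # weight = 1 / Pr(selection)
crude = sm.GLM(dvt[selected], X, family=sm.families.Binomial()).fit()
weighted = sm.GLM(dvt[selected], X, family=sm.families.Binomial(),
                  freq_weights=w).fit()
print(np.exp(crude.params[1]), np.exp(weighted.params[1]))  # crude OR > 1; weighted OR near 1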
The problem of selection bias can be extended to meta-analyses, which may have publication and selective-reporting biases. An example is the meta-analysis of Tandon and colleagues,17 which found that the presence of mutant p53 tumour suppressor gene is prognostic for disease-free survival (hazard ratio [HR] 0.45 [95% CI 0.27–0.74]) but not for overall survival (HR 1.09 [95% CI 0.60–2.81]) in patients presenting with squamous cell carcinoma arising from the oropharynx cavity. Of the total 6 studies included in the meta-analysis, all reported on the prognostic effect of p53 for overall survival, but only 3 for disease-free survival. Results on disease-free survival were reported only when deemed prognostic; thus, there appears to be a selective availability of data, leading to unexpected findings.18 A possible solution to this problem is to perform a bivariate meta-analysis, which synthesizes both outcomes jointly and accounts for their correlation,19 to reduce the impact of missing disease-free survival results in 3 studies by borrowing strength from the available overall survival results. A bivariate meta-analysis gave similar overall survival conclusions but gave an updated summary HR for disease-free survival of 0.76 (95% CI 0.40–1.42), indicating no significant evidence that p53 is prognostic for disease-free survival.18

Figure 1: Directed acyclic graphs of scenarios that may result in unexpected findings in prediction research. Selection bias: if both the predictor and the outcome (possibly through a symptom of the disease of interest) affect the probability of selecting a participant for a study, this may induce a selection bias. Mixing of effects (confounding): when a variable is related to both the predictor and the outcome, is not in the causal pathway and is not included in the model, the observed predictor–outcome relation is the combined effect of the predictor and the omitted variable. Intervention effects: the observed predictor–outcome relation becomes a combination of the direct effect of the predictor on the outcome and the indirect effect through the intervention.

Mixing of effects (confounding)
When 2 causes of a disease are mutually related, the observed effect of one can be mixed up with the effect of the other. In causal research, this phenomenon is referred to as confounding. Similarly, if 2 factors are mutually related (e.g., smoking and alcohol consumption) and one is causal for the outcome but the other is not, then exclusion of the causal factor would lead to the noncausal factor having a strong predictor effect unexpectedly. For example, omitting smoking from a prediction model for lung cancer would lead to alcohol consumption unexpectedly predicting lung cancer risk simply because it is confounded by smoking (those who smoke more tend to drink more). Because in prognostic research the interest lies in the joint predictive accuracy of multiple predictors, confounding is usually not deemed relevant.20 However, the mechanism is the same in both descriptive and causal research: when omitting from the model a variable that is related to both an included predictor and the outcome, the observed predictor–outcome relation is the combined effect of the included predictor and the omitted variable (Figure 1). Consequently, an unexpected finding of the predictor–outcome relation can be observed. The potential for mixing of effects is specific to the design and population. Hence, mixing of effects likely affects generalization of the model to other populations. An obvious solution would be to include the variable that was initially omitted from the prediction model; however, this is impossible when the variable is not observed.
Mixing of effects was observed in the example of predicting anemia in whole-blood donors (example 3). A lower risk of low hemoglobin levels was found when the number of whole-blood donations in the past 2 years increased (OR 0.92 [95% CI 0.90–0.93]), also known as the healthy donor effect, which was corrected by adding the recent history of hemoglobin level to the model (OR 1.00 [95% CI 0.98–1.02]). The most recent historic value of hemoglobin level is related to both the current hemoglobin level and the current risk of anemia, and is thus a confounder.
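A minimal sketch of this mechanism (not from the original article; it assumes a synthetic donor cohort in which an omitted variable, recent hemoglobin level, drives both the donation count and the current risk of low hemoglobin; all values are illustrative, not the Baart data) shows the crude association flipping back once the confounder is added:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 20000
donations = rng.poisson(4, n)                         # whole-blood donations, 2 yr
# assumed confounder: donors with higher recent hemoglobin donate more often
recent_hb = 14 + 0.15 * donations + rng.normal(0, 1, n)
# assumed outcome model: low hemoglobin depends on recent_hb, not on donations
p_low = 1 / (1 + np.exp(-(20 - 1.5 * recent_hb)))
low_hb = rng.binomial(1, p_low)

x_crude = sm.add_constant(donations)
x_adjusted = sm.add_constant(np.column_stack([donations, recent_hb]))
or_crude = np.exp(sm.Logit(low_hb, x_crude).fit(disp=0).params[1])
or_adjusted = np.exp(sm.Logit(low_hb, x_adjusted).fit(disp=0).params[1])
print(or_crude, or_adjusted)   # crude OR < 1 ("protective"); adjusted OR near 1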
Intervention effect
Predictor values may guide the decision to start a medical intervention. If effective, this intervention then lowers the probability of the outcome, thus attenuating the observed predictor–outcome relation. Similar to the mixing of effects, the overall observed relation is a combination of the direct effect of the predictor on the outcome and the indirect effect through the intervention (Figure 1). However, expectations of the direction of the predictor–outcome relation apply to the direct effect. Theoretically, the overall observed predictor–outcome relation could even be the reverse of the direct effect between predictor and outcome (in the case of an extremely effective intervention), thereby leading to an unexpected finding. Without further consideration, it seems unlikely that an intervention reduces the risk of the outcome to a level that is even lower than observed in a group who didn't have the indication and therefore didn't receive the intervention.
The solution to deal with an unexpected finding due to an intervention effect would be to include the intervention in the prediction model.21 Clearly, this is not possible if everyone in the study has the same intervention. In that case, it is likely that the unexpected finding actually has another cause than an intervention effect. If an intervention is equally effective in all patients, modelling the intervention effect doesn't require an interaction between predictor and intervention in the model. If the intervention is more effective in, for example, those having the predictor, then an interaction between intervention and predictor is required (see Heterogeneity).
In the prediction of metabolic acidosis in neonates (example 1), there could be an intervention effect present owing to cesarean delivery. An unexpected finding was observed for the relation between intrapartum fever and metabolic acidosis (OR 0.86 [95% CI 0.68–1.08]). Upon inclusion of cesarean delivery in the model, intrapartum fever was positively related to metabolic acidosis (OR 1.08 [95% CI 0.86–1.34]), which was in line with expectations.
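A minimal sketch of this adjustment (not from the original article; it assumes synthetic data in which fever strongly raises the chance of cesarean delivery and cesarean delivery strongly lowers the outcome risk; the effect sizes are illustrative, not the Westerhuis model) reproduces the sign flip and its correction:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 20000
fever = rng.binomial(1, 0.1, n)
# assumed intervention trigger: fever makes cesarean delivery much more likely
cesarean = rng.binomial(1, np.where(fever == 1, 0.8, 0.2))
# assumed direct effects: fever raises risk, an effective cesarean lowers it
logit_p = -2 + 0.4 * fever - 2.0 * cesarean
acidosis = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

x_crude = sm.add_constant(fever)
x_with_intervention = sm.add_constant(np.column_stack([fever, cesarean]))
or_crude = np.exp(sm.Logit(acidosis, x_crude).fit(disp=0).params[1])
or_direct = np.exp(sm.Logit(acidosis, x_with_intervention).fit(disp=0).params[1])
print(or_crude, or_direct)   # crude OR < 1 (unexpected); direct OR > 1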
Heterogeneity
The effect of a predictor may differ across subgroups of patients. This is referred to as a differential predictor effect, interaction, effect modification or heterogeneity of the predictor. When heterogeneity is not accounted for in the prediction model, the observed predictor effect is a (weighted) average of predictor effects within the different subgroups. If the predictor–outcome relations across subgroups are opposite, the direction of the observed relation depends on the proportional contributions of the subgroups. Expectations are likely based on the majority subgroup in a typical patient population, which is not necessarily the majority in the study population. Hence, heterogeneity of a predictor effect can lead to unexpected findings if not accounted for. This differs from selection bias, in that the relative size of the subgroups is not related to both predictor and outcome. The principal solution to deal with heterogeneity is to include an interaction term in the model.
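A minimal sketch of such an interaction term (not from the original article; it assumes synthetic data in which the fever effect genuinely reverses between women with and without epidural analgesia; all effect sizes are illustrative) shows how the pooled estimate averages two opposite subgroup effects:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 10000
df = pd.DataFrame({
    "fever": rng.binomial(1, 0.2, n),
    "epidural": rng.binomial(1, 0.7, n),   # assumed majority subgroup in the data
})
# assumed true effects: fever protective with epidural, harmful without it
logit_p = -2 + np.where(df["epidural"] == 1, -0.75, 1.15) * df["fever"]
df["acidosis"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

pooled = smf.logit("acidosis ~ fever + epidural", data=df).fit(disp=0)
interact = smf.logit("acidosis ~ fever * epidural", data=df).fit(disp=0)
print(np.exp(pooled.params["fever"]))            # averaged, possibly "unexpected"
print(np.exp(interact.params["fever"]))          # fever OR without epidural
print(np.exp(interact.params["fever"]
             + interact.params["fever:epidural"]))  # fever OR with epidural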

Heterogeneity is an unlikely cause for an unexpected finding in prediction models. First, heterogeneity that results in genuinely opposite directions of effect is rare in epidemiology. Second, it seems unrealistic to assume that the group of patients who are typically the majority represent only the minority of patients in a specific study population.
In the prognostic model of metabolic acidosis in neonates (example 1), the effect of intrapartum fever on metabolic acidosis (OR 0.86 [95% CI 0.68–1.08]) was unexpected. Alongside the impact of misclassifying the presence of fever (see Misclassification), this unexpected finding could also have been the result of an interaction between intrapartum fever and epidural analgesia: the OR was 0.47 (95% CI 0.35–0.64) among women who received epidural analgesia versus 3.16 (95% CI 2.16–4.64) among women without epidural analgesia.

Discussion
A first step in evaluating the validity of a clinical prediction model is to check whether the direction of the predictor–outcome relation is as expected. We identified 6 causes for unexpected findings: chance, misclassification, selection, mixing of effects (confounding), intervention effects and heterogeneity. The aforementioned causes for unexpected findings can occur simultaneously. In that instance, finding the reasons for the unexpected finding will become more complicated, yet the solutions described still hold and can be applied simultaneously.
The major problem of an unexpected finding in prediction research is that it may hamper the generalizability of a prediction model. Even though the performance of the model may be good in the population in which the model was developed, it will probably be weaker when applied to a different setting or population, indicating poor generalizability. Hence, despite high methodologic standards used in the development of the model, it will not (i.e., not without further adjustments3,22) be applicable outside the population in which it was developed. It is therefore of utmost importance to signal unexpected findings.
When the direction of a predictor–outcome relation is well established in both the literature and clinical experience, it is easy to identify unexpected (or incorrect) findings. Things become complicated when there is no pre-existing knowledge and it is unknown what direction is to be expected. Then, one has to make assumptions on the relation, and therefore it is called an unexpected finding rather than an incorrect finding. A more subtle unexpected finding occurs when the direction of a predictor–outcome association is as expected, but the magnitude of the effect is larger or smaller than expected. Still, the solutions proposed in this article could be used to solve this problem.
The examples also show that unexpected findings in prediction research are not only theoretically challenging but are a phenomenon that can occur in any field of prediction research. When confronted with an unexpected finding, one should evaluate the different reasons for an unexpected finding (i.e., chance, misclassification, selection bias, mixing of effects, heterogeneity effects, intervention effects or a combination of these). Directed acyclic graphs as shown in Figure 1 may help to identify possible causes. As mentioned, heterogeneity or an intervention effect rarely result in unexpected findings, but the clinical examples illustrate that they can occasionally happen. When an unexpected finding is observed, it is more likely that it results from chance, misclassification, selection bias or mixing of effects.
The potential for unexpected findings may differ between study designs. For example, incorrectly conducted case–control studies may be more prone to selection bias than cohort studies. Furthermore, mixing of effects becomes more likely when using retrospective, routinely collected health care registry data, in which the number and detail of observed patient characteristics are typically limited.
In multivariable prediction models, the problem of unexpected findings is likely to be smaller than in the univariable examples shown in this paper, because mixing of effects and intervention effects are accounted for by adding the appropriate covariates to the model.
In conclusion, unexpected findings of the predictor–outcome relation can occur in any kind of prediction research, and likely hamper generalizability and potential uptake of the model for clinical use. Researchers are encouraged to give explanations for possible unexpected findings in their prediction model, including the causes as well as the attempts undertaken to solve the problem, using the proposed framework for causes and solutions for unexpected findings in prediction research.
References
1. Hlatky MA, Greenland P, Arnett DK, et al. Criteria for evaluation of novel markers of cardiovascular risk: a scientific statement from the American Heart Association. Circulation 2009;119:2408-16.
2. Reilly BM, Evans AT. Translating clinical research into clinical practice: impact of using prediction rules to make decisions. Ann Intern Med 2006;144:201-9.
3. Steyerberg EW. Clinical prediction models: a practical approach to development, validation, and updating. New York (NY): Springer; 2009. p. 1-7.
4. D'Agostino RB Sr, Vasan RS, Pencina MJ, et al. General cardiovascular risk profile for use in primary care: the Framingham Heart Study. Circulation 2008;117:743-53.
5. Moons KG, Altman DG, Vergouwe Y, et al. Prognosis and prognostic research: application and impact of prognostic models in clinical practice. BMJ 2009;338:b606.
6. Westerhuis ME, Schuit E, Kwee A, et al. Prediction of neonatal metabolic acidosis in women with a singleton term pregnancy in cephalic presentation. Am J Perinatol 2012;29:167-74.
7. Oudega R, Moons KG, Hoes AW. Ruling out deep venous thrombosis in primary care. A simple diagnostic algorithm including D-dimer testing. Thromb Haemost 2005;94:200-5.
8. Naess IA, Christiansen SC, Romundstad P, et al. Incidence and mortality of venous thrombosis: a population-based study. J Thromb Haemost 2007;5:692-9.
9. Oger E. Incidence of venous thromboembolism: a community-based study in Western France. EPI-GETBP Study Group. Groupe d'Etude de la Thrombose de Bretagne Occidentale. Thromb Haemost 2000;83:657-60.
10. Baart AM, de Kort WL, Atsma F, et al. Development and validation of a prediction model for low hemoglobin deferral in a large cohort of whole blood donors. Transfusion 2012;52:2559-69.
11. Janssen KJ, Donders AR, Harrell FE Jr, et al. Missing covariate data in medical research: to impute is better than to ignore. J Clin Epidemiol 2010;63:721-7.
12. Royston P, Altman DG, Sauerbrei W. Dichotomizing continuous predictors in multiple regression: a bad idea. Stat Med 2006;25:127-41.
13. Sauerbrei W, Royston P. Building multivariable prognostic and diagnostic models: transformation of the predictors by using fractional polynomials. J R Stat Soc Ser A Stat Soc 1999;162:71-94.
14. Cole SR, Chu H, Greenland S. Multiple-imputation for measurement-error correction. Int J Epidemiol 2006;35:1074-81.
15. de Groot JA, Dendukuri N, Janssen KJ, et al. Adjusting for differential-verification bias in diagnostic-accuracy studies: a Bayesian approach. Epidemiology 2011;22:234-41.
16. Hernán MA, Hernandez-Diaz S, Robins JM. A structural approach to selection bias. Epidemiology 2004;15:615-25.
17. Tandon S, Tudur-Smith C, Riley RD, et al. A systematic review of p53 as a prognostic factor of survival in squamous cell carcinoma of the four main anatomical subsites of the head and neck. Cancer Epidemiol Biomarkers Prev 2010;19:574-87.
18. Jackson D, Riley R, White IR. Multivariate meta-analysis: potential and promise. Stat Med 2011 Jan. 26 [Epub ahead of print].
19. Riley R. Multivariate meta-analysis: the effect of ignoring within-study correlation. JRSS Series A 2009;172:789-811.
20. Brotman DJ, Walker E, Lauer MS, et al. In search of fewer independent risk factors. Arch Intern Med 2005;165:138-45.
21. Van den Bosch JE, Moons KG, Bonsel GJ, et al. Does measurement of preoperative anxiety have added value for predicting postoperative nausea and vomiting? Anesth Analg 2005;100:1525-32.
22. Janssen KJ, Vergouwe Y, Kalkman CJ, et al. A simple method to adjust clinical prediction models to local circumstances. Can J Anaesth 2009;56:194-201.

Affiliations: From the Julius Center for Health Sciences and Primary Care (Schuit, Groenwold, Moons), University Medical Center Utrecht, Utrecht, the Netherlands; the Department of Biostatistics (Harrell), Vanderbilt University School of Medicine, Nashville, Tenn.; the School of Health and Population Sciences (Riley), University of Birmingham, Birmingham, UK; the Sanquin Blood Bank (de Kort), Southeast Region, Nijmegen, the Netherlands; the Department of Obstetrics and Gynecology (Kwee), University Medical Center Utrecht, Utrecht, the Netherlands; and the Department of Obstetrics and Gynecology (Mol), Academic Medical Center, Amsterdam, the Netherlands. Richard D. Riley is a statistical editor for the BMJ.

Contributors: Ewoud Schuit, Rolf H.H. Groenwold, Frank E. Harrell Jr., Richard D. Riley and Karel G.M. Moons conceived and designed the study. Ewoud Schuit analyzed the data. Wim L.A.M. de Kort, Anneke Kwee, Ben Willem J. Mol and Karel G.M. Moons contributed data of the clinical examples. Ewoud Schuit and Rolf H.H. Groenwold drafted the manuscript, which all of the authors revised. All of the authors approved the final version submitted for publication.

Funding: Karel G.M. Moons is funded by the Netherlands Organization for Scientific Research (grant 918.10.615 and 9120.8004). Richard D. Riley is supported by funding from the MRC Midlands Hub for Trials Methodology Research at the University of Birmingham (Medical Research Council Grant ID G0800808). The funders had no role in the study design, data collection and analysis, decision to publish or preparation of the manuscript.

