
Sources of Bias in Randomised Controlled Trials
REMEMBER
 Randomised trials are the BEST way of establishing effectiveness.
 All RCTs are NOT the same.
 Although the RCT is rightly regarded by the cognoscenti as the premier research method, some trials are better than others.
 In this lecture we will look at sources of
bias in trials and how these can be
avoided.
Selection Bias - A reminder
 Selection bias is one of the main threats
to the internal validity of an experiment.
 Selection bias occurs when participants
are SELECTED for an intervention on
the basis of a variable that is associated
with outcome.
 Randomisation and similar methods abolish selection bias.
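As a hedged sketch (the function name and arm labels are illustrative, not from the lecture), simple randomisation can be written in a few lines; because each allocation is an independent coin flip, it cannot depend on any variable associated with outcome:

```python
import random

def simple_randomise(n_participants, seed=42):
    """Allocate each participant to 'treatment' or 'control' by an
    independent fair coin flip, ignoring all prognostic variables."""
    rng = random.Random(seed)  # fixed seed so the sequence is reproducible
    return [rng.choice(["treatment", "control"]) for _ in range(n_participants)]

allocation = simple_randomise(10)
print(allocation)
```

In practice the sequence would be generated in advance and held by an independent party, so that recruiters never see the next allocation.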
After Randomisation
 Once we have randomised participants
we eliminate selection bias but the
validity of the experiment can be
threatened by other forms of bias,
which we must guard against.
Forms of Bias
 Subversion Bias
 Technical Bias
 Attrition Bias
 Consent Bias
 Ascertainment Bias
 Dilution Bias
 Recruitment Bias
Bias (cont)
 Resentful demoralisation
 Delay Bias
 Chance Bias
 Hawthorne effect
 Analytical Bias.
Subversion Bias
 Subversion Bias occurs when a
researcher or clinician manipulates
participant recruitment such that groups
formed at baseline are NOT equivalent.
 Anecdotal or qualitative evidence (i.e. gossip) suggests that this is a widespread phenomenon.
 Statistical evidence also indicates that it has occurred widely.
Subversion - qualitative
evidence
 Schulz has described, anecdotally, a number of incidents of researchers subverting allocation by holding sealed envelopes up to x-ray light boxes.
 Researchers have confessed to breaking
open filing cabinets to obtain the
randomisation code.
Schulz JAMA 1995;274:1456.
Quantitative Evidence
 Trials with adequately concealed allocation show systematically smaller effect sizes than poorly concealed trials, which would not happen if allocation were never subverted.
 Trials using simple randomisation report groups that are too equal in size for this to have occurred by chance.
Poor concealment
 Schulz et al. examined 250 RCTs and classified their allocation concealment as adequate (where subversion was difficult), unclear, or inadequate (where subversion was able to take place).
 They found that badly concealed allocation led to increased effect sizes – showing CHEATING by researchers.
Comparison of concealment

Allocation concealment   Effect size (OR)
Adequate                 1.0
Unclear                  0.67    p < 0.01
Inadequate               0.59

Schulz et al. JAMA 1995;273:408.
Small VS Large Trials
 Small trials tend to give greater effect sizes than large trials; this shouldn't happen.
 Kjaergard et al. showed it was due to poor allocation concealment in small trials: when trials were grouped by allocation method, 'secure' allocation reduced the effect size by 51%.
Kjaergard et al. Ann Intern Med 2001;135:982.
Case Study
 Subversion is rarely reported for
individual studies.
 One study where it has been reported
was for a large, multicentred surgical
trial.
 Participants were being randomised across 5+ centres using sealed envelopes.
Case study (cont)
 After several hundred participants had been allocated, the study statistician noticed that there was an imbalance in age.
 This age imbalance was occurring in 3 out of the 5 centres.
 Independently, 3 clinical researchers were subverting the allocation.
Case study (cont)
 Once subversion was detected, the trial changed to a telephone allocation system.
Mean ages of groups

Clinician   p-value     Experimental   Control
All         p < 0.01    59             63
1           p = 0.84    62             61
2           p = 0.60    43             52
3           p < 0.01    57             72
4           p < 0.001   33             69
5           p = 0.03    47             72
Others      p = 0.99    64             59
Example of Subversion

[Figure: envelope number plotted against recruitment sequence.]
Using Telephone Allocation

Clinician   p-value    Experimental   Control
All         p = 0.37   59             57
1           p = 0.62   57             57
2           p = 0.24   60             51
3           NA         61             70
4           p = 0.99   63             65
5           p = 0.91   57             62
Others      p = 0.99   59             56
Subversion - summary
 Appears to be widespread.
 Secure allocation usually prevents this
form of bias.
 Need not be too expensive.
 Essential to prevent cheating.
Secure allocation
 Can be achieved using telephone
allocation from a dedicated unit.
 Can be achieved using independent
person to undertake allocation.
Technical Bias
 This occurs when the allocation system breaks down, often due to a computer fault.
 A good example is the COMET 1 trial (COMET II was done because COMET 1 suffered this bias).
COMET 1
 A trial of two types of epidural anaesthetic for women in labour.
 The trial was using MINIMISATION via a computer programme.
 The groups were minimised on age of mother and her ethnicity.
 The programme had a fault.
COMET Lancet 2001;358:19.
COMET 1 – Technical Bias

AGE         Traditional   Combined    Low dose
Total       388           335         331
<25 years   13 (3%)       179 (53%)   173 (52%)
COMET II
 This new study had to be undertaken
and another 1000 women recruited and
randomised.
 LESSON – Always check the balance of
your groups as you go along if
computer allocation is being used.
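The allocation method and the lesson above can be sketched in code. This is a hypothetical minimisation routine, not the actual COMET program; the factor names, arm labels, and the `balance_report` helper are illustrative assumptions:

```python
import random
from collections import defaultdict

ARMS = ["traditional", "combined", "low dose"]
FACTORS = ["age_group", "ethnicity"]  # factors minimised on, as in COMET

# counts[arm][(factor, level)] -> number already allocated with that level
counts = {arm: defaultdict(int) for arm in ARMS}

def minimise(participant, rng=random.Random(0)):
    """Allocate to the arm that minimises imbalance over the factors;
    break ties at random."""
    def imbalance(arm):
        return sum(counts[arm][(f, participant[f])] for f in FACTORS)
    best = min(imbalance(a) for a in ARMS)
    arm = rng.choice([a for a in ARMS if imbalance(a) == best])
    for f in FACTORS:
        counts[arm][(f, participant[f])] += 1
    return arm

def balance_report():
    """Running check: print the factor-level counts per arm, so that a
    fault like COMET 1's shows up early rather than after 1000 women."""
    for arm in ARMS:
        print(arm, dict(counts[arm]))
```

Calling `balance_report()` every few dozen allocations is exactly the kind of ongoing check the slide recommends.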
Attrition Bias
 Most trials lose participants after randomisation. This can cause bias, particularly if attrition differs between groups.
 If a treatment has side-effects, drop-outs may be higher among the less well participants, which can make a treatment appear to be effective when it is not.
Attrition Bias
 We can avoid some of the problems with attrition bias by using Intention to Treat analysis, where we keep as many of the patients in the study as possible even if they are no longer 'on treatment'.
Sensitivity analysis
 Analysis of trial results can be subjected
to a sensitivity analysis whereby those
who drop out in one arm are assumed
to have the worst possible outcome,
whilst those who drop out in the parallel
arm are assumed to have the best
possible outcome. If the findings are
the same we are reassured.
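As an illustrative sketch (the function name and the example counts are my own, not from the lecture), the best-case/worst-case sensitivity analysis can be expressed directly:

```python
def sensitivity_bounds(events_a, n_a, drop_a, events_b, n_b, drop_b):
    """Bound the risk difference (arm A minus arm B) under extreme
    assumptions about dropouts. Worst case for A: every A dropout has
    the bad outcome and every B dropout the good one; best case is the
    reverse. n_a and n_b are the numbers actually analysed."""
    worst = (events_a + drop_a) / (n_a + drop_a) - events_b / (n_b + drop_b)
    best = events_a / (n_a + drop_a) - (events_b + drop_b) / (n_b + drop_b)
    return worst, best

# e.g. 30/100 events with 10 dropouts in A vs 40/100 with 5 dropouts in B
worst, best = sensitivity_bounds(30, 100, 10, 40, 100, 5)
```

If both bounds point the same way, the conclusion does not depend on what happened to the dropouts and we are reassured.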
Consent Bias
 This occurs when consent to take part in the trial is sought AFTER randomisation.
 The danger is most frequent in cluster trials.
 For example, Graham et al. randomised schools to a teaching package for emergency contraception. More children took part in the intervention arm than in the control arm.
Graham et al. BMJ 2002;324:1179.
Consent bias?

              Intervention   Control
N             1768           2026
% recruited   88%            83%
Knowledge     17%            21%
Consent Bias?
 Because more children consented in the intervention group, we would expect their knowledge to be lower (as we include children less likely to know).
 Conversely, in the control group we get a volunteer or consent effect, with only the most knowledgeable agreeing to take part.
Ascertainment Bias
 This occurs when the person reporting
the outcome can be biased.
 A particular problem when outcomes
are not ‘objective’ and there is
uncertainty as to whether an event has
occurred.
Example
 A group of students' essays were randomly assigned photographs purporting to be the student. The photos were of people judged to be "attractive", "average", or "below average". The average mark was significantly HIGHER for the average-looking student.
 Why? Markers were biased into marking higher for students whom they believed were average looking (like themselves).
Another example
 Use of a homeopathic dilution of histamine was shown in an RCT of cell cultures to have significant effects on cell motility.
 Ascertainment was not blind.
 The study was repeated with assessors blind to which petri dish had distilled water and which had homeopathic dilutions of histamine. The effect, like snow in the Arabian Desert, disappeared.
Dilution Bias
 This occurs when the intervention or control group receives the opposite treatment. It affects all trials where there is non-adherence to the intervention.
 For example, in a trial of calcium and vitamin D about 4% of the controls were getting the treatment and 35% of the intervention group stopped taking their treatment. This will 'dilute' any apparent treatment effect.
Effect of dilution bias

No dilution:
Intervention: 1000 participants, 70 events.
Control: 1000 participants, 100 events.
Difference between groups: 30 events (30% relative reduction).

With dilution:
Intervention: 1000 participants, all get treatment, 70 events.
Control: 1000 participants, but 200 also get the treatment; of their expected 20 events, 6 are prevented, so now only 94 events.
Difference between groups: 24 events (about 25% relative reduction).
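The arithmetic above can be checked with a short calculation; the function is an illustrative sketch using the slide's numbers (1000 controls, 10% event risk, 20% crossover, treatment cutting events by 30%):

```python
def diluted_control_events(n_control, base_risk, crossover, relative_reduction):
    """Expected events in a control arm when a fraction of controls
    obtain the treatment; crossovers have events reduced by the
    treatment's relative effect."""
    treated = n_control * crossover          # controls who get the treatment
    untreated = n_control - treated
    prevented = treated * base_risk * relative_reduction
    return untreated * base_risk + treated * base_risk - prevented

events = diluted_control_events(1000, 0.10, 0.20, 0.30)
# 200 crossovers expected 20 events; 6 are prevented -> 94 events in total
```

With 94 rather than 100 control events, the observed difference shrinks from 30 to 24 events, diluting the apparent effect.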
Sources of dilution
 Calcium and vitamin D trial: controls buying calcium supplements, or intervention patients not taking them.
 Hip protector trial: control patients MAKING their own padded knickers from bubble wrap, or intervention patients not wearing them.
Dilution Bias
 This can be partly prevented by refusing the controls access to the experimental treatment.
 It will always be a problem when control participants actively seek the treatment themselves.
Resentful Demoralisation
 This can occur when participants are randomised to a treatment they do not want.
 This may lead to them reporting
outcomes badly in ‘revenge’.
 This can lead to bias.
Resentful Demoralisation
 One solution is to use a patient
preference design where only
participants who are ‘indifferent’ to the
treatment they receive are allocated.
 This should remove its effects.
Hawthorne Effect
 This is an effect that occurs through being part of the study rather than through the treatment. Interventions that involve more TLC than the control could show an effect due to the TLC rather than the drug or surgical procedure.
 Placebos largely eliminate this, or TLC should be given to controls as well.
Delay bias
 This can occur if there is a delay
between randomisation and the
intervention.
 In the GRIT trial of early delivery some
women allocated to immediate delivery
were delayed. This will dilute the
effects of treatment.
Delay bias
 Similarly, in the calcium and vitamin D trial there was a delay of months between allocation and receipt of treatment.
 This can sometimes be dealt with by starting the analysis for active and control participants from the time treatment was received.
Chance Bias
 Groups can be uneven in important variables purely by chance.
 This can be reduced by stratification or, possibly better, by using ANCOVA.
 Stratification, of course, can lead to TECHNICAL or SUBVERSION bias.
Analytical Bias
 Once a trial has been completed and the data gathered in, it is still possible to arrive at the wrong conclusions by analysing the data incorrectly.
 Most IMPORTANT is ITT (intention to treat).
 Inappropriate sub-group analysis is also a common practice.
Intention To Treat
 The main analysis of data must be by groups as randomised. Per-protocol or active-treatment analysis can lead to a biased result.
 Patients not taking the full treatment are usually quite different from those who are, and restricting the analysis can lead to bias.
Sub-Group Analyses
 Once the main analysis has been
completed it is tempting to look to see
if the effect differs by group.
 Is treatment more or less effective in
women?
 Is it better or worse among older people?
 Is treatment better among people at
greater risk?
Sub-Groups
 All of these are legitimate questions.
The problem is the more subgroups one
looks at the greater is the chance of
finding a spurious effect.
 Sample size estimations and statistical
tests are based on 1 comparison only.
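This inflation of false positives is easy to quantify. Assuming k independent subgroup tests, each at the 5% level, and no true effect anywhere, the chance of at least one spurious 'significant' finding is 1 − 0.95^k (a textbook approximation, not a figure from the lecture):

```python
def prob_spurious_finding(n_tests, alpha=0.05):
    """Probability of at least one false-positive result across
    n_tests independent tests when no true effect exists anywhere."""
    return 1 - (1 - alpha) ** n_tests

for k in (1, 5, 12):
    print(f"{k:>2} subgroup tests -> "
          f"{prob_spurious_finding(k):.0%} chance of a spurious finding")
```

With 12 subgroups (one per star sign), the chance of at least one spurious finding is already around 46%.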
Sub-Groups – an example
 In a large RCT of aspirin for myocardial infarction, a sub-group analysis showed that in people with the star signs Gemini and Libra, aspirin was INEFFECTIVE.
 This is complete NONSENSE!
 This shows the dangers of subgroup analyses.
Lancet 1988;ii:349-60.
More Seriously
 Sub-group analyses led to:
 The wrong finding that tamoxifen was ineffective among women < 50 years;
 Streptokinase was ineffective > 6 hours after MI;
 Aspirin for secondary prevention in women is ineffective;
 Antihypertensive treatment for primary prevention in women is ineffective;
 Beta-blockers are ineffective in older people;
 And so on……
Sub groups
 To avoid spurious findings, these should be pre-specified and based on a reasonable hypothesis.
 Pre-specification is important to avoid data dredging: if you torture the data enough, it will confess.
Cluster Trial Analysis
 Cluster trials (groups of individuals)
need special statistical analysis.
 Standard methods (e.g. the two-sample t-test) will not be appropriate.
 Often cluster trials are inappropriately analysed, which leads to spurious precision.
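One simple correct approach, summarising each cluster first and then comparing cluster-level means, can be sketched as follows (illustrative data, not the Edinburgh trial); the effective sample size is the number of clusters, not the number of individuals:

```python
from statistics import mean, stdev
from math import sqrt

def cluster_level_t(clusters_a, clusters_b):
    """Two-sample t statistic computed on cluster means. Each argument
    is a list of clusters; each cluster is a list of individual
    outcomes. Degrees of freedom = number of clusters minus 2."""
    means_a = [mean(c) for c in clusters_a]
    means_b = [mean(c) for c in clusters_b]
    na, nb = len(means_a), len(means_b)
    # pooled standard deviation of the cluster means
    sp = sqrt(((na - 1) * stdev(means_a) ** 2 +
               (nb - 1) * stdev(means_b) ** 2) / (na + nb - 2))
    return (mean(means_a) - mean(means_b)) / (sp * sqrt(1 / na + 1 / nb))

# e.g. 3 intervention practices vs 3 control practices, binary outcomes
t = cluster_level_t([[1, 0, 1], [1, 1, 0], [1, 1, 1]],
                    [[0, 0, 1], [0, 1, 0], [0, 0, 0]])
```

Analysing the 18 individuals as if independent would use far too many degrees of freedom and overstate the precision.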
Example
 Edinburgh breast screening trial
randomised GP practices to offer breast
screening or not.
 The design was cluster but the analysis was by individual (and still didn't manage to find a significant effect).
Summary
 Despite the RCT being the BEST research method, unless expertly used it can lead to biased results.
 Care must be taken to avoid as many
biases as possible.
