
Clinical Perspective

Interpreting Validity Indexes for Diagnostic Tests: An Illustration Using the Berg Balance Test
Physical therapists routinely make diagnostic and prognostic decisions in the course of patient care. The purpose of this clinical perspective is to illustrate what we believe is the optimal method for interpreting the results of studies that describe the diagnostic or prognostic accuracy of examination procedures. To illustrate our points, we chose the Berg Balance Test as an exemplar measure. We combined the data from 2 previously published research reports designed to determine the validity of the Berg Balance Test for predicting risk of falls among elderly people. We calculated the most common validity indexes, including sensitivity, specificity, predictive values, and likelihood ratios for the combined data. Clinical scenarios were used to demonstrate how we believe these validity indexes should be used to guide clinical decisions. We believe therapists should use validity indexes to decrease the uncertainty associated with diagnostic and prognostic decisions. More studies of the accuracy of diagnostic and prognostic tests used by physical therapists are urgently needed. [Riddle DL, Stratford PW. Interpreting validity indexes for diagnostic tests: an illustration using the Berg Balance Test. Phys Ther. 1999;79:939–948.]

Key Words: Diagnosis; Tests and measurements, general.

Daniel L Riddle, Paul W Stratford



How can clinicians use diagnostic test studies to guide clinical decisions for individual patients?

Physical therapists routinely perform diagnostic tests on their patients. For diagnostic test results to be most useful, we contend that validity estimates from studies of the diagnostic test in question should be used to guide clinical decisions. The purpose of this perspective is to describe a conceptual model proposed by other authors1,2 for the application of validity indexes for diagnostic (or prognostic) tests to clinical practice. We use a clinical illustration to demonstrate how measures, which we refer to as validity indexes (ie, sensitivity, specificity, positive and negative predictive values, likelihood ratios), can be interpreted for individual patients. The illustration combines data from 2 studies on the use (validity) of the Berg Balance Test (BBT) for predicting risk of falls among elderly people aged 65 to 94 years.3,4 The illustration is meant only to demonstrate how validity indexes can be useful for practice and not necessarily to assist clinicians in the examination of patients suspected of having balance disorders.

Studies that can be used to determine whether meaningful clinical inferences can be made based on diagnostic tests are classified as criterion-related validity studies.5 Criterion-related validity studies take 1 of 2 forms. Researchers can compare a clinical measure with a gold standard measure (ideally, a valid diagnostic test or a definitive measure of whether the condition of interest is truly present) obtained at about the same time as the measure being studied. In our illustration, the patient's report of falling is considered the gold standard measure. In other cases, a gold standard measure may be a diagnosis made at the time of surgery or via an invasive diagnostic procedure. Studies in which some form of gold standard is obtained at about the same time as the diagnostic test being studied are commonly called concurrent criterion-related validity studies.5 Researchers can also compare a measure's prediction of a future event with what actually happens to a patient in the future. These studies are commonly termed predictive criterion-related validity studies.5 Studies designed to estimate the risk of a future adverse event are often used by clinicians to make judgments about prognoses. For example, investigating whether the BBT can be used to predict whether a person will fall in the future is an illustration of a predictive criterion-related validity study. The gold standard for this type of study would be the subjects' report of falls for a period of time following administration of the BBT.
The Berg Balance Test

The BBT was designed to be an easy-to-administer, safe, simple, and reasonably brief measure of balance for elderly people. The developers expressed the hope that the BBT would be used to monitor the status of a patient's balance and to assess disease course and response to treatment.6 Patients are asked to complete 14 tasks, and each task is rated by an examiner on a 5-point scale ranging from 0 (cannot perform) to 4 (normal performance). Elements of the test are supposed to be representative of daily activities that require balance, including tasks such as sitting, standing, leaning over, and stepping. Some tasks are rated according to the quality of the performance of the task, whereas the time taken to complete the task is measured for other tasks. The developers of the BBT provided operational definitions for each task and the criteria for grading each task. Overall scores can range from 0 (severely impaired balance) to 56 (excellent balance).

Data exist to support the reliability of BBT scores obtained from elderly subjects.3,6,7 For example, Bogle Thorbahn and Newton3 reported an intertester reliability (Spearman rho) value of .88 for 17 subjects aged 69 to 94 years. Evidence also exists to support the content validity,6 construct validity,7,8 and criterion-related validity3,4,8 of test scores for inferring fall risk in elderly subjects tested in a variety of settings. Construct validity has been assessed using a variety of approaches. For example, construct validity was supported to the extent that BBT scores were shown to correlate reasonably well with other measures of balance (Pearson r = .38–.91) and measures of motor performance (Pearson r = .62–.94).7,8

DL Riddle, PhD, PT, is Associate Professor, Department of Physical Therapy, Medical College of Virginia Campus, Virginia Commonwealth University, 1200 E Broad, Richmond, VA 23298-0224 (USA) (driddle@hsc.vcu.edu). Address all correspondence to Dr Riddle. PW Stratford, PT, is Associate Professor, School of Rehabilitation Science, and Associate Member, Department of Clinical Epidemiology and Biostatistics, McMaster University, Hamilton, Ontario, Canada. Concept, writing, and data analysis were provided by Riddle and Stratford. Consultation (including review of manuscript before submitting) was provided by Cheryl Ford-Smith, Susan Cromwell, Dr Roberta Newton, and Dr Anne Shumway-Cook. This article was submitted December 7, 1998, and was accepted July 7, 1999.



For example, the Pearson r correlation between the BBT and the balance subscale of the Tinetti Performance-Oriented Mobility Assessment9 was .91.8 The Pearson r correlation between the BBT and the Barthel Index mobility subscale10 was .67.8

The Illustration

To illustrate how to interpret validity indexes, we have combined data from 2 studies3,4 designed to determine whether BBT scores could identify elderly people (age range 65–94 years) who are at risk for falling. Subjects in both studies were of similar ages and had similar BBT scores, and the proportions of male and female subjects were also similar (Tab. 1). In both studies, the subjects reported whether they had fallen and the number of falls in the 6 months prior to being admitted to the study. In addition, for both studies, the authors appeared to use essentially the same definition for what constituted a fall. Bogle Thorbahn and Newton3 defined a fall as an unexpected contact of any part of the body with the ground. Shumway-Cook and colleagues4 defined a fall as any event that led to an unplanned, unexpected contact with a supporting surface.

The 2 studies differed in 2 potentially important ways. First, Shumway-Cook et al4 excluded subjects with comorbidities that may have affected balance. Bogle Thorbahn and Newton3 did not exclude these types of subjects. Subjects in the study by Shumway-Cook and colleagues reported no comorbidities, whereas 38% of the subjects in the study by Bogle Thorbahn and Newton reported having diagnoses of neurological or orthopedic conditions. Second, subjects in the study by Shumway-Cook et al were required to have fallen at least twice in the previous 6 months, whereas subjects in the study by Bogle Thorbahn and Newton had to have fallen only once or more in the previous 6 months. It is unclear how these differences affected the validity estimates reported by these authors, but we believe the studies were similar enough to allow us to combine the data for the illustration in this article. It is also unclear why the proportion of fallers (50%) in the study by Shumway-Cook et al was much higher than the proportion of fallers (17%) in the study by Bogle Thorbahn and Newton.

Diagnostic Test Methodology

We believe that the subjects studied (the sample) should represent those types of patients who will be measured during clinical practice.11 In our illustration, the sample of subjects was elderly people (ages ranging from 65 to 94 years) living independently. Some patients will have the disorder of interest (using our illustration, some subjects reported falls), and some patients will not have the disorder of interest (some reported no falls). The test being studied (ie, the BBT) and the gold standard or criterion measure (ie, determination of whether the subject had fallen in the past 6 months) are applied to all subjects, and the test's diagnostic accuracy (Tab. 2) is determined.9

Table 1.
Characteristics of the Subjects Combined From Two Studies3,4

Characteristic                                   Study of Shumway-Cook and colleagues4 (N = 44)    Study of Bogle Thorbahn and Newton3 (N = 66)
Age (y), mean (SD)                               76.1 (6.6)                                         79.2 (6.2)
Age range (y)                                    65–94                                              69–94
Sex (% male / % female)                          27 / 73                                            24 / 76
Berg Balance Test score, mean (SD)               46.1 (10.5)                                        48.2 (9.9)
Berg Balance Test score range                    18–56                                              9–56
Gold standard classification of fallers (%)      50                                                 17

The results from diagnostic accuracy studies are often summarized in a format similar to that shown in Table 2.12–14 In this table, the terms condition present and condition absent are used to identify people who truly have or do not have the condition of interest (the gold standard test is either positive or negative). The letters a, b, c, and d are used to reference cells in the table, and the sums a + b, c + d, a + c, b + d, and a + b + c + d denote marginal values. The cell values and marginal values are combined in various ways to calculate validity indexes. Definitions of terms related to diagnostic testing and formulas for the many validity indexes also are presented in Table 2.

Sensitivity and Specificity

Sensitivity indicates how often a diagnostic test detects a disease or condition when it is present. Sensitivity essentially tells the clinician how good the test is at correctly identifying patients with the condition of interest. Specificity indicates how often a diagnostic test is negative in the absence of the disease or condition. Specificity essentially tells the clinician how good the test is at correctly identifying the absence of disease.15 The closer the sensitivity or specificity is to 100%, the more sensitive or specific the test. The authors of both studies in our illustration reported the sensitivity and specificity of the BBT for determining current fall risk.



Table 2.
Two × Two Table, Formulas, and Definitions for Validity Indexesa

                             Gold Standard Test Result
Diagnostic Test Result       Condition Present       Condition Absent        Total
Positive                     True Positive (a)       False Positive (b)      a + b
Negative                     False Negative (c)      True Negative (d)       c + d
Total                        a + c                   b + d                   a + b + c + d

Sensitivity: Those people correctly identified by the test as having the condition of interest as a percentage of all those who truly have the condition of interest: [100% × (a/[a + c])].
Specificity: Those people correctly identified by the test as not having the condition of interest as a percentage of all those who truly do not have the condition of interest: [100% × (d/[b + d])].
False Positive Rate: Those people falsely identified by the test as having the condition of interest as a percentage of all patients without the condition of interest: [100% × (b/[b + d])].
False Negative Rate: Those people falsely identified by the test as not having the condition of interest as a percentage of all patients with the condition of interest: [100% × (c/[a + c])].
Positive Predictive Value: Those people correctly identified by the test as having the condition of interest as a percentage of all those identified by the test as having the condition of interest: [100% × (a/[a + b])].
Negative Predictive Value: Those people correctly identified by the test as not having the condition of interest as a percentage of all those identified by the test as not having the condition of interest: [100% × (d/[c + d])].
Diagnostic Accuracy: The percentage of people who are correctly diagnosed: [100% × (a + d)/(a + b + c + d)].
Prevalence: The percentage of people in a target population who truly have the condition of interest: [100% × (a + c)/(a + b + c + d)].
Likelihood Ratio for a Positive Test: Sensitivity divided by (1 − specificity): [{a/(a + c)}/{b/(b + d)}].
Likelihood Ratio for a Negative Test: (1 − sensitivity) divided by specificity: [{c/(a + c)}/{d/(b + d)}].
Pretest Probability of the Disorder: The therapist's estimate of the patient's chance of having the disorder (condition of interest) prior to the therapist doing the test. It is usually estimated by the clinician based on prior knowledge and experience.
Posttest Probability of the Disorder: The patient's chance of having the condition of interest after the results of the test are obtained.

a All definitions agree with the Standards for Tests and Measurements in Physical Therapy Practice.5 Definitions for sensitivity, specificity, false positive rate, false negative rate, positive predictive value, and negative predictive value are derived from the Standards for Tests and Measurements in Physical Therapy Practice.5 Definitions for diagnostic accuracy, prevalence, likelihood ratio for a positive test, likelihood ratio for a negative test, pretest probability of the disorder, and posttest probability of the disorder are derived from Sackett and colleagues.1,2
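As a concrete companion to the formulas in Table 2, the short Python sketch below computes the same validity indexes from the four cell counts of a 2 × 2 table. The function name and output format are ours rather than part of the original reports; the example cell counts (a = 21, b = 8, c = 12, d = 69) are the combined-data cells for a BBT cutoff point of 45 shown in Table 3.

def validity_indexes(a, b, c, d):
    """Validity indexes for a 2 x 2 diagnostic table (cells labeled as in Table 2)."""
    total = a + b + c + d
    sens = a / (a + c)   # sensitivity
    spec = d / (b + d)   # specificity
    return {
        "sensitivity": sens,
        "specificity": spec,
        "positive predictive value": a / (a + b),
        "negative predictive value": d / (c + d),
        "diagnostic accuracy": (a + d) / total,
        "prevalence": (a + c) / total,
        "positive likelihood ratio": sens / (1 - spec),
        "negative likelihood ratio": (1 - sens) / spec,
    }

# Combined data for a BBT cutoff point of 45 (Table 3): a = 21, b = 8, c = 12, d = 69.
for name, value in validity_indexes(21, 8, 12, 69).items():
    print(f"{name}: {value:.2f}")

Run on those counts, the sketch reproduces the values reported in Tables 3 and 4 for that cutoff (sensitivity 64%, specificity 90%, positive predictive value 72%, negative predictive value 85%, and likelihood ratios of approximately 6.1 and 0.4).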

Berg et al8 contended that the best way to interpret scores on the BBT is to use a single cutoff point of 45 to differentiate those at risk for falls (those with scores below 45) from those who are not at risk for falls (those with scores of 45 or higher). Using a cutoff point of 45, as recommended by Berg et al, the sensitivity for the data collected by Shumway-Cook and colleagues4 was 55% and the specificity was 95%. For the data collected by Bogle Thorbahn and Newton,3 the sensitivity was 82% and the specificity was 87%. When we combined the data from both studies, a cutoff point of 45 yielded a sensitivity of 64% and a specificity of 90% (Tab. 3). A sensitivity of 64% indicates that 64% of subjects who were true fallers had a positive BBT (a score below 45). That is, approximately a third of the subjects who were fallers were missed by the BBT. Although there are no agreed-on standards for judging sensitivity and specificity, we believe the sensitivity of 64% should generally be considered quite low because more than a third of the fallers were misclassified. A specificity of 90% indicates that 90% of subjects who were nonfallers had a negative BBT (a score of 45 or higher). That is, only 10% of the nonfallers were missed by the BBT. Specificity was much higher than sensitivity, indicating that the BBT does a better job of identifying subjects who are not fallers than subjects who are fallers.

When we use diagnostic tests, we do not know who has the condition of interest and who does not have the condition of interest. That is, sensitivity and specificity have somewhat limited usefulness because they do not describe validity in the context of the test result.1 Rather, they describe validity in the context of the gold standard, a value we do not know when we do diagnostic tests. Sensitivity, for example, does not take into account the false positive test results (Tab. 2) in a group of patients. Stated another way, sensitivity does not describe how often patients with positive tests have the disorder of interest. Sensitivity only describes the proportion of patients with the disorder of interest who have a positive test. Similarly, specificity does not take into account false negative test results (Tab. 2). Specificity does not describe how often patients with negative tests do not have the disorder of interest. Specificity only describes the proportion of patients without the disorder of interest who have a negative test. Diagnostic testing, in our view, is used because clinicians want to know the probability of the condition existing. Because clinicians make decisions based on diagnostic test results and not necessarily on results of tests that are considered gold standards, some authors1 have contended that positive and negative predictive values (see next section) are more important than sensitivity and specificity for clinical practice.

Table 3.
Sensitivity and Specificity for Four Cutoff Points of the Berg Balance Test (BBT): 2 × 2 Tables for Four BBT Cutoff Points

Cell counts follow the layout of Table 2 (a = fallers with a positive test, b = nonfallers with a positive test, c = fallers with a negative test, d = nonfallers with a negative test; a score below the cutoff is a positive test).

BBT Cutoff Point     a     b     c     d     Sensitivity a/(a + c)     Specificity d/(b + d)
40                   15    3     18    74    45%                       96%
45                   21    8     12    69    64%                       90%
50                   28    21    5     56    85%                       73%
55                   32    57    1     20    97%                       26%

Positive and Negative Predictive Values

Before diagnostic testing, therapists usually have collected a variety of information (eg, medical history, some examination data) from the patient. Based on their knowledge, training, and experience, therapists can sometimes use these data, depending on what is known about various conditions, to estimate the probability that the condition of interest is present. This is known as the pretest probability of the disorder.1 For example, if a therapist found that an elderly patient had a history of dizziness and required assistance with most activities of daily living, the therapist might anticipate that the patient's risk of falling was quite high, say on the order of 60%. Because the therapist knew evidence existed to indicate that dizziness16 and difficulty with home activities of daily living17 increase fall risk, the therapist estimated the pretest probability for falls to be quite high. The pretest probability estimate of 60% is only an estimate and may contain some error. The therapist could then do a BBT to better estimate the patient's risk of falling. Positive and negative predictive values describe the probability of disease after the test is completed. The probability of the condition of interest after the test result is obtained is also known as the posttest probability of the disorder.1

For many clinicians, the idea of estimating the probability of a disorder prior to doing a diagnostic test (pretest probability) may seem like a new or unusual concept. We believe that some clinicians, based on their experience and training, may use an ordinal-based scale estimate of pretest probability, such as the disease is highly likely, somewhat likely, or not very likely, given the patient's signs and symptoms.

In our view, however, using percentage estimates of pretest probability is not commonly done by most therapists. We suggest that therapists should make percentage estimates of the pretest probability of the disorder of interest. For example, if a clinician used an ordinal scale similar to the one just described, we contend that the clinician should convert it to a percentage estimate of pretest probability in the following way. If the pretest probability of the disorder were judged to be highly likely, this judgment could be converted to a 75% pretest probability, whereas a rating of somewhat likely could be converted to a pretest probability of 50%. A rating of not very likely might be converted to a pretest probability of 25%. We believe that, as therapists become more comfortable with making percentage estimates of pretest probability, they will become more accurate, although we have no data to support this argument. By using percentage estimates for pretest probability, therapists can take full advantage of positive and negative predictive values (and likelihood ratios, to be discussed elsewhere in this article) reported in the literature. Several examples are discussed elsewhere in this article to illustrate how pretest probability can be estimated and how these estimates can influence the interpretation of the diagnostic test.

Positive predictive value is the proportion of patients with a positive test who have the condition of interest.1 Negative predictive value is the proportion of patients with a negative test who do not have the condition of interest.1 The closer the positive predictive value is to 100%, the more likely the disease is present with a positive test finding. The closer the negative predictive value is to 100%, the more likely the disease is absent with a negative test finding.

Table 4.
Validity Estimates for Several Different Cutoff Points of the Berg Balance Test (values are point estimates with 95% confidence intervals in parentheses; a score below the cutoff point is a positive test)

Cutoff of 35: sensitivity 30% (14–46); specificity 96% (92–100); positive predictive value 77% (54–100); negative predictive value 67% (58–76); positive likelihood ratio 7.8 (2.3–26.4); negative likelihood ratio 0.7 (0.6–0.9).
Cutoff of 40: sensitivity 45% (28–62); specificity 96% (92–100); positive predictive value 83% (66–100); negative predictive value 67% (57–77); positive likelihood ratio 11.7 (3.6–37.6); negative likelihood ratio 0.6 (0.4–0.8).
Cutoff of 45: sensitivity 64% (48–80); specificity 90% (83–97); positive predictive value 72% (56–88); negative predictive value 85% (77–93); positive likelihood ratio 6.1 (3.0–12.4); negative likelihood ratio 0.4 (0.3–0.6).
Cutoff of 50: sensitivity 85% (73–97); specificity 73% (63–83); positive predictive value 57% (43–71); negative predictive value 92% (85–99); positive likelihood ratio 3.1 (2.1–4.6); negative likelihood ratio 0.2 (0.1–0.5).
Cutoff of 55: sensitivity 97% (91–100); specificity 26% (16–36); positive predictive value 36% (26–46); negative predictive value 95% (86–100); positive likelihood ratio 1.3 (1.1–1.5); negative likelihood ratio 0.1 (0.02–0.8).
Cutoff of 60: sensitivity 100% (91–100); specificity 1% (0–3); positive predictive value 30% (21–39); negative predictive value 100% (5–100); positive likelihood ratio 1.01 (1–1.04); negative likelihood ratio undefined.

In our illustration, the combined data from both studies yielded a positive predictive value of 72% when using a cutoff point of 45 on the BBT (Tab. 4). A positive predictive value of 72% indicates that 72% of patients with a positive test (a BBT score below 45) were classified as fallers (the gold standard) and 28% of the patients were misclassified as fallers based on the BBT, an error rate that we consider to be fairly high. A negative predictive value of 85% indicates that 85% of patients with a negative test (a BBT score of 45 or higher) were classified as nonfallers (the gold standard). The misclassification rate for nonfallers is lower than that for fallers (ie, we can be more confident about identifying nonfallers than fallers based on BBT test results). As with sensitivity and specificity, no standard exists for what constitutes an acceptable level of positive or negative predictive value. In addition, interpretations of predictive values, sensitivity, and specificity are not always straightforward. In the next section, we attempt to describe the critical issues that we believe should be considered when interpreting validity indexes.

Issues Related to the Interpretation of Sensitivity, Specificity, and Predictive Values

Some tests have a binary outcome (2 mutually exclusive categories such as present or absent), but many other test results are reported on an ordinal scale (such as the manual muscle test) or a continuous scale (such as the BBT). When using sensitivity, specificity, and predictive values, the researcher is forced to dichotomize results for ordinal and continuous measures (such as the BBT) and, therefore, may lose information about the usefulness of the test.

One example is the use of a single cutoff point of 45 for the BBT. We will show later how some researchers have dealt with the problem of only one cutoff point for continuous measures.

The choice of the cutoff point influences the sensitivity, specificity, and positive and negative predictive values. This concept is illustrated in Table 4. For example, if the cutoff point for the BBT were set at 40, the sensitivity would be 45% and the specificity would be 96%. With a cutoff point of 50, the sensitivity is 85% and the specificity is 73%. Generally, the choice of cutoff point by the researcher will increase one validity index (eg, sensitivity) but will decrease the other validity index (eg, specificity). For example, when sensitivity rises (as seen when going from a cutoff point of 40 to a cutoff point of 50 on the BBT), specificity falls. The same concept holds for positive and negative predictive values. When the positive predictive value rises (as seen when going from a cutoff point of 50 to a cutoff point of 40 on the BBT), the negative predictive value falls (Tab. 4).

The principal factor influencing the clinician's choice of a cutoff point is related to the consequence of misclassifying patients. Broadly speaking, there are 3 choices for a cutoff point: (1) maximize both sensitivity and specificity, (2) maximize sensitivity at the cost of minimizing specificity, and (3) maximize specificity at the cost of minimizing sensitivity. Maximizing both sensitivity and specificity is appropriate when the consequences of false positives and false negatives are about equal.

Maximizing sensitivity at the cost of minimizing specificity is desirable when the consequence of a false negative (eg, falsely identifying a subject as a nonfaller) exceeds the consequence of a false positive (eg, falsely identifying the subject as a faller). Conversely, maximizing specificity at the cost of minimizing sensitivity is desirable when the consequence of a false positive exceeds the consequence of a false negative. In the case of the BBT, it would appear that sensitivity should be optimized to avoid classifying a faller as a nonfaller. Misclassifying fallers would appear to have serious consequences (eg, fractures).

An important advantage associated with the use of sensitivity and specificity is that they are not influenced by prevalence. Prevalence is defined as the proportion of patients with the disorder of interest among all patients tested.1 A therapist can use sensitivity and specificity estimates from a published report and apply these estimates to a patient as long as the patient is reasonably similar to the subjects in the study. Predictive values should guide clinical decisions (they estimate validity in the context of the test result), but unlike sensitivity and specificity, predictive values are prevalence dependent.1 That is, as the proportion of those with the disease changes, predictive values also change. Predictive values, therefore, vary when the prevalence of the disorder of interest changes. As the prevalence increases, the positive predictive value increases and the negative predictive value decreases. When the prevalence decreases, the positive predictive value decreases and the negative predictive value increases. Because the chance that an individual patient will have a target disorder varies (ie, the pretest probability changes depending on the patient's signs and symptoms), the prevalence associated with a diagnostic accuracy study may not apply to a given patient. For example, in the study by Shumway-Cook et al,4 there was a prevalence of fallers of 50%. If, for example, a clinician estimated the pretest probability of falling for a patient to be only 10%, the predictive values from the data of Shumway-Cook et al would not provide accurate estimates of positive or negative predictive values for the patient. The positive predictive value from the data of Shumway-Cook and colleagues would be spuriously high (because of the higher prevalence), and the negative predictive value would be spuriously low for the patient with a pretest probability of 10%.
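Because sensitivity and specificity do not depend on prevalence, they can be combined with a patient-specific pretest probability to recalculate predictive values rather than borrowing predictive values from a study whose prevalence differs. The brief sketch below is one way to do this (it is simply Bayes' theorem written in probability form); the function name is ours, and the sensitivity of 64% and specificity of 90% are the combined-data estimates for a cutoff point of 45.

def predictive_values(sensitivity, specificity, pretest):
    """Positive and negative predictive values for a given pretest probability (Bayes' theorem)."""
    p_positive = sensitivity * pretest + (1 - specificity) * (1 - pretest)  # probability of a positive test
    ppv = sensitivity * pretest / p_positive
    npv = specificity * (1 - pretest) / (1 - p_positive)
    return ppv, npv

# Combined-data estimates for a BBT cutoff point of 45: sensitivity 64%, specificity 90%.
for pretest in (0.50, 0.10):
    ppv, npv = predictive_values(0.64, 0.90, pretest)
    print(f"pretest probability {pretest:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")

With the same test characteristics, the positive predictive value falls from roughly 86% at a 50% pretest probability to roughly 42% at a 10% pretest probability, which is the point made above about applying the predictive values of Shumway-Cook et al4 to a low-risk patient.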

Unfortunately, predictive values are influenced by prevalence, whereas sensitivity and specificity are not. Sensitivity and specificity, however, are related to positive and negative predictive values in the following way. When specificity is high, the positive predictive value tends to be high, and when sensitivity is high, the negative predictive value tends to be high. That is, when sensitivity is high, a negative test generally indicates the disorder is not present (or, in our illustration, the person is not at risk of falling). When specificity is high, a positive test generally indicates the disorder is present (the person is at risk of falling).2 Table 4 illustrates this concept. When specificity is high, for example, for a BBT cutoff point of 40 (96%), the positive predictive value will generally be high (83%). A clinician might hypothetically believe, for example, that based on medical history and examination data, a patient had a pretest probability of falling of approximately 40%, and the patient might subsequently have a score of 37 on the BBT, a score considered positive using a cutoff point of 40 (Tab. 4). The positive predictive value would be 83%, an increase of 43 percentage points from the pretest probability. We contend that the clinician can be reasonably confident the patient is a faller. Similarly, when sensitivity is high (97% for a cutoff point of 55), the negative predictive value will also generally be high (95%). For example, a clinician might believe, based on a patient's medical history and examination data, that the patient had a pretest probability of falling of approximately 40% (or a pretest probability of not falling of 60%). The patient might subsequently have a score of 56 on the BBT, a score considered negative using a cutoff point of 55 (Tab. 4). The negative predictive value (posttest probability) in this hypothetical example would be 95%, and we argue that the clinician can be very confident the patient is not a faller. We noted earlier that predictive values are dependent on prevalence, and in our examples, the prevalence (pretest probability) for falls was estimated to be 40%, a reasonable approximation of the prevalence reported in our illustration using the BBT data. Had the pretest probabilities for the patient examples been appreciably lower or higher, the predictive values reported in the 2 examples above would not have been accurate estimates of posttest probability.

In summary, sensitivity and specificity are not dependent on prevalence and are therefore seen as useful for clinical practice.1 As a general guide, we believe clinicians should conclude the condition is likely to be present when a test is positive and the specificity for the test is high. Conversely, clinicians should conclude the condition is likely to be absent when a test is negative and the sensitivity for the test is high.1,2 Positive and negative predictive values are, in part, prevalence dependent. As a result, we argue that predictive values are meaningful only when the prevalence reported in a study approximates the pretest probability of the disorder the clinician has estimated for the patient. To be most accurate, pretest probability estimates should be based on sound scientific data.

Confidence Intervals for Validity Indexes

Sensitivity, specificity, positive and negative predictive values, and likelihood ratios represent point estimates of population values.15


Table 5.
Positive Likelihood Ratios for Several Different Intervals of Berg Balance Test Scores

Berg Balance Test Score Interval    Gold Standard Positive (Fallers): Number (Proportion)    Gold Standard Negative (Nonfallers): Number (Proportion)    Positive Likelihood Ratio (95% CIa)
Below 40                            15 (15/33 = 0.455)                                        3 (3/77 = 0.039)                                             11.7 (3.6–37.6)
40–44                               6 (6/33 = 0.182)                                          5 (5/77 = 0.065)                                             2.8 (0.9–8.5)
45–49                               7 (7/33 = 0.212)                                          13 (13/77 = 0.169)                                           1.3 (0.5–2.9)
50–54                               4 (4/33 = 0.121)                                          36 (36/77 = 0.467)                                           0.3 (0.1–0.7)
55 and above                        1 (1/33 = 0.03)                                           20 (20/77 = 0.26)                                            0.1 (0.02–0.8)
Total                               33                                                        77

a CI = confidence interval.

Point estimates are estimations of the true value for the index of interest. To determine the accuracy of a point estimate, confidence intervals (CIs) are calculated.15 Confidence intervals indicate how closely a study's point estimate of these values approximates the population value.15 Confidence intervals essentially describe for clinicians how confident they can be about a point estimate. For example, if sensitivity were 80%, with a 95% CI of 70% to 90%, the true value for sensitivity in the population (with 95% certainty) lies between 70% and 90%. The width of a CI becomes narrower as the sample size increases, and it becomes wider as the sample size decreases.15 In addition, the width is dependent on the variability of the measure within the population.15 The degree of confidence we place on these validity estimates can be calculated.1,18 In our view, studies that examine the validity of diagnostic tests should provide CI estimates. For example, the 95% CI for specificity reported by Bogle Thorbahn and Newton3 ranged from 67% (not very specific) to 100% (perfect specificity). The 95% CI for specificity for the combined data from the studies of Bogle Thorbahn and Newton3 and Shumway-Cook et al4 ranged from 83% to 97% (both values, in our opinion, represent reasonably high specificity).
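For indexes that are simple proportions (sensitivity, specificity, and predictive values), an approximate 95% CI can be obtained from the cell counts with the usual normal (Wald) formula, p ± 1.96 √(p[1 − p]/n). The sketch below is a minimal illustration of that calculation; the function name is ours, and the counts (69 of 77 nonfallers correctly classified at a cutoff of 45) come from Table 3. It yields roughly 83% to 96%, close to the 83% to 97% interval quoted above; the original authors' exact method and rounding may differ slightly.

import math

def proportion_ci(successes, n, z=1.96):
    """Point estimate and approximate 95% CI for a proportion (normal approximation)."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Specificity of the combined data at a BBT cutoff point of 45:
# 69 of 77 nonfallers had a negative test (Table 3).
p, lower, upper = proportion_ci(69, 77)
print(f"specificity {p:.0%} (95% CI {lower:.0%} to {upper:.0%})")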

Likelihood Ratios

Positive and negative likelihood ratios are 2 additional validity indexes for diagnostic tests. (Likelihood ratios should not be confused with odds ratios; odds ratios are an estimate of risk often expressed in case-control studies designed to investigate causation of a disease.) Likelihood ratios have been proposed to be more efficient and more powerful than sensitivity, specificity, and predictive values.15,19 Likelihood ratios essentially combine the benefits of both sensitivity and specificity into one index.1 Likelihood ratios indicate by how much a given diagnostic test result will raise or lower the pretest probability of the target disorder.20 Likelihood ratios are reported in a decimal number format rather than as percentages. A likelihood ratio of 1 means the posttest probability (probability of the condition after the test results are obtained) for the target disorder is the same as the pretest probability (probability of the condition before the test was done). Likelihood ratios greater than 1 increase the chance the target disorder is present, whereas likelihood ratios less than 1 decrease the chance the target disorder is present.20

Jaeschke and colleagues20 proposed the following guide to interpreting likelihood ratios. Likelihood ratios greater than 10 or less than 0.1 generate large and often conclusive changes from pretest to posttest probability. Likelihood ratios between 5 and 10 or between 0.1 and 0.2 generate moderate changes from pretest to posttest probability. Likelihood ratios from 2 to 5 and from 0.2 to 0.5 result in small (but sometimes important) shifts in probability, and likelihood ratios from 0.5 to 2 result in small and rarely important changes in probability.

Because likelihood ratios can be applied to score intervals for tests with continuous measures, we believe they are more useful than sensitivity, specificity, and predictive values, which are limited to data presented in a dichotomous format. For example, the positive likelihood ratio for the score interval of 40 to 44 (a test score considered positive based on recommendations of Berg and colleagues8) is 2.8 (Tab. 5). This likelihood ratio indicates that a patient with a BBT score between 40 and 44 is 2.8 times more likely to be a faller than a nonfaller. The 95% CI ranges from 0.9 to 8.5. That is, the 95% CI overlaps 1 (no change in the probability of the disorder); therefore, a clinician cannot be very confident that a score between 40 and 44 increases the probability of identifying a patient at risk for falls. If a patient scores below 40 on the BBT, however, the likelihood ratio increases to 11.7 (95% CI 3.6–37.6). A patient with a BBT score below 40 is at greater risk for falls as compared with patients with scores between 40 and 44. On average, patients with BBT scores less than 40 are almost 12 times more likely to be a faller than a nonfaller.
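An interval-specific likelihood ratio is simply the proportion of fallers whose scores fall in the interval divided by the proportion of nonfallers whose scores fall in the interval. The sketch below recomputes the point estimates in Table 5 from its raw counts (33 fallers and 77 nonfallers); the dictionary layout and variable names are ours.

# Counts from Table 5: BBT score interval -> (fallers, nonfallers).
counts = {
    "below 40": (15, 3),
    "40-44": (6, 5),
    "45-49": (7, 13),
    "50-54": (4, 36),
    "55 and above": (1, 20),
}

total_fallers = sum(fallers for fallers, _ in counts.values())            # 33
total_nonfallers = sum(nonfallers for _, nonfallers in counts.values())   # 77

for interval, (fallers, nonfallers) in counts.items():
    # Interval likelihood ratio: proportion of fallers in the interval
    # divided by the proportion of nonfallers in the interval.
    lr = (fallers / total_fallers) / (nonfallers / total_nonfallers)
    print(f"BBT {interval}: positive likelihood ratio = {lr:.1f}")

The printed values match the point estimates in Table 5 (11.7, 2.8, 1.3, 0.3, and 0.1).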



Applications of Likelihood Ratios to Clinical Practice

Likelihood ratios can also be calculated for several different cutoff points of the BBT (Tab. 4). Scores below the cutoff are considered positive tests, and scores at or above the cutoff are considered negative tests. Because the scale is dichotomized when using cutoffs, both positive and negative likelihood ratios can be calculated. For example, given a BBT cutoff point of 40, the positive likelihood ratio is 11.7 (95% CI 3.6–37.6). That is, a patient with a score of less than 40 is approximately 12 times more likely to be a faller than a nonfaller. The negative likelihood ratio is 0.6 (95% CI 0.4–0.8). That is, a patient with a negative BBT score (a score of 40 or higher) is 0.6 times as likely to be a faller as a nonfaller. When using a cutoff point of 40, for a negative score (a score of 40 or higher), a patient is more likely to be a nonfaller than a faller. Based on the data summarized in Table 4, lower cutoffs will usually increase the magnitude of the positive likelihood ratio (a desirable trait), but they will also increase the magnitude of the negative likelihood ratio (an undesirable trait).

Another advantage of the use of likelihood ratios is that, along with the use of a nomogram (Figure), a clinician can determine the probability of a disorder, given the result of the test (also called posttest probability).21 Because likelihood ratios do not vary when disorder prevalence varies, likelihood ratios can be generalized to other patients. To use the nomogram, the clinician must first estimate the pretest probability of the disorder. The pretest probability of the disorder (likelihood of the presence of the disorder prior to doing the test) is estimated, as mentioned earlier, from the clinician's own clinical training and experience with similar types of patients in the specific setting in which the patients are seen.2 The constellation of signs and symptoms also influences the clinician's judgment of the pretest probability of the disorder. If we knew the likelihood ratios for each of the medical history items and signs and symptoms of patients, we could repeatedly recalculate the pretest and posttest probability of the disorder of interest and come up with a very accurate estimate of the final posttest probability.20 Most of these data, unfortunately, are unavailable, so clinicians typically must rely on training, experience, and knowledge of the literature to estimate the pretest probability of the disorder.

To use the nomogram, the clinician simply estimates the pretest probability of the disorder and identifies this value in the left-hand column of the nomogram (Figure). A straightedge is then anchored on the left column of the Figure at the pretest probability estimate and aligned on the middle column at the likelihood ratio. The right column indicates the posttest probability.
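The nomogram is a graphical shortcut for a calculation that can also be done directly: convert the pretest probability to odds, multiply by the likelihood ratio, and convert back to a probability. The sketch below is a minimal arithmetic alternative to reading the nomogram (the function name is ours); the two calls use the pretest probabilities and likelihood ratios of the hypothetical examples that follow.

def posttest_probability(pretest, likelihood_ratio):
    """Posttest probability from a pretest probability and a likelihood ratio (the nomogram's calculation)."""
    pretest_odds = pretest / (1 - pretest)
    posttest_odds = pretest_odds * likelihood_ratio
    return posttest_odds / (1 + posttest_odds)

# First hypothetical example below: pretest probability 20%, negative test at a cutoff of 50 (likelihood ratio 0.2).
print(f"{posttest_probability(0.20, 0.2):.0%}")   # approximately 5%

# Second hypothetical example below: pretest probability 50%, positive test at a cutoff of 40 (likelihood ratio 11.7).
print(f"{posttest_probability(0.50, 11.7):.0%}")  # approximately 92%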

Figure.
Nomogram for interpreting diagnostic test results. Reprinted with permission from Fagan.21 Copyright 1975, Massachusetts Medical Society. All rights reserved.

To demonstrate how likelihood ratios and the nomogram can be used to guide clinical decision making, we will apply our concept and argument to 2 hypothetical situations. For the first example, assume your 67-year-old patient lived alone in her home and was independent and relatively active. Her only comorbidity was that she had a hip joint replacement 1 year prior to testing. The therapist suspected the pretest probability of the disorder (falls, in this case) would be relatively low, perhaps on the order of 20%.


The patient then had a BBT done, and a score of 50 (a negative test, using a cutoff point of 50) was obtained. The negative likelihood ratio for a cutoff point of 50 is 0.2 (Tab. 4). We align a ruler with the left column of the nomogram (Figure) at 20 (20% pretest probability) and with the middle column at a likelihood ratio of approximately 0.2. We find that the posttest probability of current fall risk for this patient is approximately 5%, an improvement of 15 percentage points from the pretest probability (the chance of the patient being a faller has gone from 20% down to 5%). Hypothetically, we substantially increased our level of certainty about the patient's current risk of falling based on the BBT score.

Our second hypothetical example is about a 75-year-old man who was diagnosed with congestive heart failure approximately 5 years previously and requires assistance with some activities of daily living. He reports losing his balance occasionally and remembers falling once in the past few years. Based on the patient's medical history and functional status, the pretest probability for falls would be fairly high (ie, on the order of 50%). A BBT was done, and a score of 38 (a positive test, using a cutoff point of 40) was obtained. Using the data in Table 4, the positive likelihood ratio for a score of less than 40 is 11.7. That is, this patient is 11.7 times more likely to be a faller than a nonfaller. Using the nomogram shown in the Figure, the posttest probability for current fall risk is approximately 92%, an increase of 42 percentage points above the pretest probability. If we believe our data are correct and our estimates are appropriate, we can theoretically be confident that we have identified a patient who has a very high probability of falling. We again appear to have substantially increased our level of certainty about the patient's risk of falling.

Summary

Validity indexes for diagnostic tests were reviewed, and terms used in studies designed to describe the validity of diagnostic tests were defined. Data from 2 studies examining the validity of measurements obtained with the BBT for inferring current fall risk were used as an illustration to demonstrate how clinicians could use diagnostic test studies to guide clinical decisions for individual patients. Unfortunately, there are only a small number of diagnostic test studies describing the validity of examination procedures commonly used by physical therapists. There is an urgent need to conduct more studies of the usefulness of diagnostic and prognostic tests in physical therapy.

Acknowledgments

We thank Dr Anne Shumway-Cook, Linda Thorbahn, and Dr Roberta Newton for their insights and for allowing us to use their data in this article. We also thank Cheryl Ford-Smith and Sue Cromwell for reviewing an earlier version of the manuscript.

References
1 Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston, Mass: Little, Brown and Co Inc; 1991:85–86.
2 Sackett DL, Richardson WS, Rosenberg W, Haynes RB. Evidence-based Medicine: How to Practice and Teach EBM. New York, NY: Churchill Livingstone Inc; 1997.
3 Bogle Thorbahn LD, Newton RA. Use of the Berg Balance Test to predict falls in elderly persons. Phys Ther. 1996;76:576–583.
4 Shumway-Cook A, Baldwin M, Polissar NL, Gruber W. Predicting the probability for falls in community-dwelling older adults. Phys Ther. 1997;77:812–819.
5 Task Force on Standards for Measurement in Physical Therapy. Standards for tests and measurements in physical therapy practice. Phys Ther. 1991;71:589–622.
6 Berg KO, Wood-Dauphinee SL, Williams JI, Gayton D. Measuring balance in the elderly: preliminary development of an instrument. Physiotherapy Canada. 1989;41:304–311.
7 Berg KO, Maki BE, Williams JI, et al. Clinical and laboratory measures of postural balance in an elderly population. Arch Phys Med Rehabil. 1992;73:1073–1080.
8 Berg KO, Wood-Dauphinee SL, Williams JI, Maki B. Measuring balance in the elderly: validation of an instrument. Can J Public Health. 1992;83(suppl 2):S7–S11.
9 Tinetti ME. Performance-oriented assessment of mobility problems in elderly patients. J Am Geriatr Soc. 1986;34:119–126.
10 Mahoney FL, Barthel DW. Functional evaluation: the Barthel index. Md State Med J. 1965;14:61–65.
11 Department of Clinical Epidemiology and Biostatistics, McMaster University. How to read clinical journals, II: to learn about a diagnostic test. Can Med Assoc J. 1981;124:703–710.
12 Department of Clinical Epidemiology and Biostatistics, McMaster University. Interpretation of diagnostic data, 2: how to do it with a simple table (part A). Can Med Assoc J. 1983;129:511.
13 Department of Clinical Epidemiology and Biostatistics, McMaster University. Interpretation of diagnostic data, 2: how to do it with a simple table (part B). Can Med Assoc J. 1983;129:1217.
14 Department of Clinical Epidemiology and Biostatistics, McMaster University. Interpretation of diagnostic data, 2: how to do it with simple math. Can Med Assoc J. 1983;129:2229.
15 Sackett DL. A primer on the precision and accuracy of the clinical examination. JAMA. 1992;267:2638–2644.
16 Luukinen H, Koski K, Kivela SL, Laippala P. Social status, life changes, housing conditions, health, functional abilities, and lifestyle as risk factors for recurrent falls among the home-dwelling elderly. Public Health. 1996;110:115–118.
17 Tinetti ME, Speechley M, Ginter SF. Risk factors for falls among elderly persons living in the community. N Engl J Med. 1988;319:1701–1707.
18 Colton T. Statistics in Medicine. Boston, Mass: Little, Brown and Co Inc; 1974:160.
19 Crombie DL. Diagnostic process. J Coll Gen Prac. 1963;6:579–589.
20 Jaeschke R, Guyatt GH, Sackett DL. Users' guides to the medical literature, III: how to use an article about a diagnostic test, B: what are the results and will they help me in caring for my patients? JAMA. 1994;271:703–707.
21 Fagan TJ. Nomogram for Bayes theorem [letter]. N Engl J Med. 1975;293:257.


