comprehension skills can be increased, these increases
should have been evident by the end of this 15-week course.
We identified gains in accuracy by performing linear
regression analyses of test performance on prediction and
postdiction for each of the three exams and then compared
the regressions across the course. Results of the linear
regression analyses for each of the three exams are shown in
Table 1. Increasing R² values for prediction (.08, .19, and .24) but consistent values for postdiction (.25, .25, and .27) indicate that prediction accuracy increased with each exam, whereas postdiction accuracy remained relatively consistent across the three exams. Greater R² values for postdiction than prediction on the first two exams indicate that postdictive accuracy exceeded predictive accuracy, which is consistent with other studies of accuracy (e.g., Glenberg & Epstein, 1985; Glenberg et al., 1987; Maki & Serra, 1992). However, the similar R² values for prediction and postdiction on the last exam suggest that by the third exam, prediction accuracy was approaching a ceiling.

Figure 3. Mean performance versus mean predicted and postdicted performance by subgroups for Exam 3. [Figure: mean performance plotted against mean predicted and postdicted scores (40-100) for four subgroups, with a perfect-accuracy diagonal separating underconfident from overconfident judgments.]

TEST PREDICTION AND PERFORMANCE 165

Table 1
Prediction and Postdiction Regressions for Each Exam
With Test Performance as the Outcome Variable

Exam   Prediction          R²       Postdiction         R²
1      y = 36.44 + .39x    .08**    y = 39.05 + .41x    .25***
2      y = 37.22 + .50x    .19***   y = 36.91 + .51x    .25***
3      y = 10.91 + .76x    .24***   y = 21.23 + .65x    .27***

**p < .005. ***p < .001.
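The regressions summarized in Table 1 are ordinary least-squares fits of exam score on judged score, compared by their R² values. A minimal sketch of this kind of calibration regression, using fabricated data (the variable names and numbers below are illustrative only, not the study's data):

```python
import numpy as np

def calibration_regression(judgments, scores):
    """Regress actual exam scores on predicted (or postdicted) scores
    and return (intercept, slope, R^2), the quantities reported in Table 1."""
    x = np.asarray(judgments, dtype=float)
    y = np.asarray(scores, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)  # least-squares line y = a + b*x
    fitted = intercept + slope * x
    ss_res = np.sum((y - fitted) ** 2)          # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)        # total sum of squares
    return intercept, slope, 1.0 - ss_res / ss_tot

# Fabricated example: predictions that loosely track performance.
rng = np.random.default_rng(0)
predictions = rng.uniform(50, 100, size=40)
performance = 20 + 0.6 * predictions + rng.normal(0, 8, size=40)
a, b, r2 = calibration_regression(predictions, performance)
```

Higher R² means judged scores account for more of the variance in actual scores, which is how prediction and postdiction accuracy are compared across exams above.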
As we stated earlier, postdictions are based, in part, on
students' knowledge of the nature of the test and test items
and how well they believe they performed. This privileged
knowledge gained after the test generally contributes to
higher postdictive than predictive accuracy (e.g., Glenberg
& Epstein, 1985; Glenberg et al., 1987; Maki & Serra,
1992). Indeed, with the benefit of this knowledge, greater
predictive than postdictive accuracy would be difficult to
explain. The stable postdictive accuracy across the three
exams suggests that students were making the most of this
privileged knowledge. If one accepts that predictive accu-
racy cannot exceed postdictive accuracy, then the extent to
which predictive accuracy can increase is restricted by the
value of postdictive accuracy. Therefore, it appears that the
increases in predictive accuracy may have reached a maxi-
mum by the end of the 15-week course.
Because of the relation between accuracy and perfor-
mance indicated in the analyses above and in other studies of
accuracy (e.g., Maki & Berry, 1984; Shaughnessy, 1979;
Sinkavich, 1995), we separated into two groups those
students who had exhibited high performance at the begin-
ning and end of the course from those who had exhibited low
performance. We again performed linear regression analyses
of performance on prediction and postdiction for each group
using only the first and third exams. Students who scored
above the median on both the first and third exams were
assigned to a high performance group, and students who
scored below the median on both exams were assigned to a
low performance group. Students who scored at the median
for one of the two exams were placed in the high or low
group on the basis of whether their other score fell above or
below the median. The median score on each of these exams
was 68; therefore, confounding effects of test difficulty on
judgments of performance were minimized. The low perfor-
mance group consisted of 38 students, and the high perfor-
mance group consisted of 35. Results of the analyses are
shown in Table 2.
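The median-split rule described above can be made concrete in a few lines. This is a sketch of that grouping logic; the function name and the "mixed" label (for students above the median on one exam but below it on the other, who fit neither group) are ours:

```python
def assign_group(exam1, exam3, median=68):
    """Median-split grouping: above the median on both exams -> high,
    below on both -> low; a score at the median is classified by the
    other exam's score. Students above on one exam and below on the
    other fit neither group and are labeled 'mixed' here."""
    def side(score):
        return (score > median) - (score < median)  # +1 above, -1 below, 0 at median
    s1, s3 = side(exam1), side(exam3)
    if (s1 > 0 and s3 >= 0) or (s1 >= 0 and s3 > 0):
        return "high"
    if (s1 < 0 and s3 <= 0) or (s1 <= 0 and s3 < 0):
        return "low"
    return "mixed"
```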
For the low performance group, prediction and postdic-
tion regressions were nonsignificant for both exams. Low-
performing students showed poor prediction and postdiction
accuracy on the first exam, and they showed no improve-
ment by the end of the course. High performance students
also showed low predictive and postdictive accuracy on the
first exam. By the end of the course, however, high
performance students showed increases in predictive accuracy (R² = .24) and especially postdictive accuracy (R² = .46).
Our earlier regression analyses, which did not differenti-
ate by performance, indicated that prediction accuracy
increased across the course, whereas postdiction accuracy
remained relatively constant. However, the analysis of
performance subgroups suggests that the increase in predic-
tion accuracy may have been due primarily to the high-
performing students. Also, high-performing students showed
a large increase in postdictive accuracy between Exams 1
and 3, whereas low-performing students showed no in-
crease. It appears, therefore, that the postdictive stability
across the three exams that was noted earlier may have been
due to a moderating effect on the R² values caused by the low-performing students. Whether high-performing students
would continue to show increases in predictive and postdic-
tive accuracy with further feedback over still longer periods
of time is a question for future research.
To What Extent Do Prior Performance and Judgments
of Performance Influence Subsequent Judgments?
Another goal of this study was to examine the relations
among prior performance, judgments of performance, and
subsequent judgments of performance. We predicted that as
the course progressed, students would learn to base their
judgments of performance on the more accurate predictor,
prior performance. Therefore, the relation between judg-
ments of performance and prior judgments would decrease
with a corresponding increase in the relation between
judgments of performance and prior performance. It is
possible, though, that judgments of performance are mediated by the number of hours students studied for a test. For example, one who studies a great deal for a test will likely give more confident predictions of performance than one who studies very little. Therefore, in addition to the variables of performance and judgments of performance, we also investigated the number of hours studied.

Standard multiple regression analyses were performed for each of the three exams, with predictions and postdictions as the dependent variables. The independent variables for each regression analysis included all measures of predictions, postdictions, performance, and hours studied taken prior to the prediction or postdiction in question. Hence, each regression analysis differed in its collection of independent variables. The correlation matrix and descriptive statistics for the variables used in the multiple regression analyses are shown in Table 3.

Table 2
Prediction and Postdiction Regressions for High Versus Low Performing Students for
Exams 1 and 3 With Test Performance as the Outcome Variable

                      Low performance             High performance
Regression            Equation           R²       Equation           R²
Exam 1
  Prediction          y = 55.03 + .04x   .002     y = 60.97 + .22x   .08
  Postdiction         y = 56.91 + .02x   .002     y = 52.40 + .34x   .11
Exam 3
  Prediction          y = 60.18 - .02x   .0002    y = 37.60 + .51x   .24**
  Postdiction         y = 63.13 - .06x   .004     y = 32.39 + .59x   .46***

**p < .005. ***p < .001.

166 HACKER, BOL, HORGAN, AND RAKOW

Table 3
Correlation Matrix and Descriptive Statistics for Variables Used in Multiple Regression Analyses

Variable            1     2     3     4     5     6     7     8     9    10    11    12
1. Prediction 1
2. Prediction 2    .43
3. Prediction 3    .23   .55
4. Postdiction 1   .47   .40   .04
5. Postdiction 2   .47   .74   .43   .49
6. Postdiction 3   .24   .50   .59   .36   .62
7. Hours 1         .24   .15   .25   .12   .13   .20
8. Hours 2         .06   .39   .46  -.14   .40   .42   .32
9. Hours 3        -.25   .12   .30  -.15   .10   .36   .32   .63
10. Exam 1         .36   .29   .25   .55   .33   .28   .26  -.03  -.13
11. Exam 2         .31   .46   .41   .27   .53   .35   .29   .30   .15   .68
12. Exam 3         .03   .21   .54   .06   .30   .50   .16   .30   .24   .54   .72
M                79.64 75.34 76.64 70.01 73.22 74.24  7.83 11.60  9.94 68.20 74.74 69.36
SD                8.90  9.60  6.97 15.59 10.81  8.46  4.57  8.63  8.80 12.40 11.88 12.09

Note. Correlations are based on completed pairs.
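Correlations "based on completed pairs" (the note to Table 3) refers to pairwise deletion: each correlation uses every student who has both of the two measures, even if that student is missing others. A small fabricated sketch of the idea (the function and column names are ours, not the study's):

```python
import numpy as np

def pairwise_corr(columns):
    """Pearson correlation matrix with pairwise deletion: each r(i, j)
    uses only the rows where both column i and column j are non-missing."""
    names = list(columns)
    k = len(names)
    r = np.full((k, k), np.nan)
    for i in range(k):
        for j in range(k):
            x = np.asarray(columns[names[i]], dtype=float)
            y = np.asarray(columns[names[j]], dtype=float)
            ok = ~np.isnan(x) & ~np.isnan(y)   # completed pairs only
            if ok.sum() > 1:
                r[i, j] = np.corrcoef(x[ok], y[ok])[0, 1]
    return names, r

# Fabricated data with missing values (np.nan marks a skipped questionnaire).
data = {
    "prediction_1": [80, 75, np.nan, 90, 60],
    "exam_1":       [70, 68, 72, np.nan, 55],
    "hours_1":      [5, 8, 2, 10, np.nan],
}
names, r = pairwise_corr(data)
```

Because each pair of variables can draw on a different subset of students, the n behind each correlation in such a matrix can differ from cell to cell.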
Predictive and postdictive judgments on Exam 1. The
only variable measured prior to the first prediction was
number of hours studied. The regression of prediction on
number of hours studied for Exam 1 was not significant
(r = .19, p = .07). For postdiction on Exam 1, the indepen-
dent variables were prediction of Exam 1 and number of
hours studied. Table 4 shows the unstandardized (B) and standardized (β) regression coefficients, the squared semipartial correlations (sr²), and the R, R², and adjusted R² values. Only prediction of Exam 1 contributed significantly to postdiction (sr² = .22). Thus, optimism in prediction contributed to optimism in postdiction. Surprisingly, amount of
time studying for the exam did not contribute significantly to
either prediction or postdiction. A reasonable hypothesis is
that more study time would be associated with greater
confidence in the material studied and hence contribute to
more optimistic predictions and postdictions. Because this
was the first exam, perhaps students were uncertain about
what to study and how much to study and, therefore, were
uncertain of what impact, if any, their study time and effort
would have on their performance.
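The quantities reported in Table 4 can be sketched directly: B is the raw regression coefficient, β rescales B by the predictor and outcome standard deviations, and sr² is the drop in R² when a predictor is removed from the full model (its unique contribution). A minimal numpy sketch with fabricated data (the function names and example numbers are ours):

```python
import numpy as np

def r_squared(X, y):
    """R^2 and slope coefficients of an OLS fit (intercept added internally)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ coef
    return 1.0 - resid.var() / y.var(), coef[1:]

def regression_summary(X, y):
    """Unstandardized B, standardized beta, and squared semipartial
    correlations sr^2 (unique variance each predictor adds to R^2)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    r2_full, B = r_squared(X, y)
    beta = B * X.std(axis=0) / y.std()          # beta = B * (SD_x / SD_y)
    sr2 = np.array([r2_full - r_squared(np.delete(X, j, axis=1), y)[0]
                    for j in range(X.shape[1])])
    return B, beta, sr2, r2_full

# Fabricated data: postdiction modeled from prediction and hours studied.
rng = np.random.default_rng(1)
pred = rng.uniform(50, 100, 60)
hours = rng.uniform(0, 20, 60)
post = 10 + 0.7 * pred + 0.5 * hours + rng.normal(0, 5, 60)
B, beta, sr2, r2 = regression_summary(np.column_stack([pred, hours]), post)
```

When predictors are correlated, as the study's predictions, postdictions, and exam scores are, the sr² values do not sum to R²; the difference is variance the predictors share.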
Predictive and postdictive judgments on Exam 2. The
independent variables measured for predicting performance
on Exam 2 were prediction and postdiction of Exam 1,
number of hours studied for Exams 1 and 2, and score on
Exam 1. The results of the standard multiple regression (see
Table 4) show that number of hours studied for Exam 2 (sr² = .16) contributed the most to the prediction of Exam 2, followed by prediction (sr² = .06) and postdiction (sr² = .06) of Exam 1.

Table 4
Multiple Regressions With Postdiction of Exam 1 and Prediction and Postdiction
of Exam 2 as Outcome Variables

                        Postdiction 1         Prediction 2          Postdiction 2
Variable                B    β    sr²         B    β    sr²         B    β    sr²
Prediction Exam 1      .78  .47   .22***     .31  .29   .06**      .18  .15   .02
Prediction Exam 2                                                  .55  .48   .19***
Postdiction Exam 1                           .21  .32   .06**      .19  .26   .05*
Hours studied Exam 1   .02  .01  <.01       -.33 -.17   .02       -.24 -.11   .01
Hours studied Exam 2                         .52  .46   .16***     .33  .27   .04**
Exam 1                                       .05  .06  <.01        .01  .01  <.01
R²                     .22                   .45                   .67
Adjusted R²            .21                   .42                   .64
R                      .47***                .67***                .82***

*p < .05. **p < .005. ***p < .001.
For postdiction of performance on Exam 2, the indepen-
dent variables were prediction of Exams 1 and 2, postdiction
of Exam 1, number of hours studied for Exams 1 and 2, and
score on Exam 1. The results of the standard multiple
regression (see Table 4) show that prediction of Exam 2
(sr
2
= .19) was the primary contributor to postdiction of
Exam 2, followed by postdiction of Exam 1 (sr
2
= .05) and
number of hours studied for Exam 2 (sr
2
= .04).
The relation between prediction and postdiction that was
evident in the first exam was again evident in the second
exam: Optimistic predictions contributed to optimistic post-
dictions. In contrast to Exam 1, students on average reported
studying nearly 4 hr more for Exam 2, and this greater study
time contributed strongly to their predictions and, to some
extent, their postdictions for Exam 2. Expecting to do well
on a test because one has studied a lot is a reasonable
expectation, as is expecting that one has done well because
of studying. Thus, the hypothesis that was proposed in the
analysis of Exam 1 concerning an association between study
time and confidence in one's knowledge and performance
was supported by the results from Exam 2.
Prior predictions and postdictions contributed to both
prediction and postdiction of Exam 2, but prior performance
did not. This provides support for at least part of our third
hypothesis. We predicted that early in the semester, students
would rely more on prior judgments of performance than on
prior performance in making subsequent judgments of
performance. However, we predicted that this reliance on
prior judgments of performance would later shift to prior
performance. Whether this shift occurred is examined next.
Predictive and postdictive judgments on Exam 3. The
independent variables measured for prediction of Exam 3
were predictions and postdictions of Exams 1 and 2; number
of hours studied for Exams 1, 2, and 3; and scores on Exams
1 and 2. The results of the standard multiple regression (see
Table 5) show that only prediction of Exam 2 (sr² = .08) and postdiction of Exam 1 (sr² = .01) contributed significantly
to the prediction of Exam 3. In contrast to Exam 2 but
similar to Exam 1, number of hours studied did not
contribute significantly to prediction. The lack of a signifi-
cant contribution from hours studied was likely due to the
nearly 2-hr decrease in study time from Exam 2. In addition,
because of the low correlation (r = .15, p = .16) between
performance on Exam 2 and number of hours studied for
Exam 3, the decrease in study time for Exam 3 appears to
have occurred for all students, regardless of how they
performed on Exam 2.
The independent variables measured for postdicting Exam
3 were prediction of Exams 1,2, and 3; postdiction of Exams
1 and 2; number of hours studied for Exams 1, 2, and 3; and
scores on Exams 1 and 2. The results of the standard
multiple regression are shown in Table 5. Prediction of Exam 3 (sr² = .13), postdiction of Exam 2 (sr² = .04), and number of hours studied for Exam 3 (sr² = .04) contributed
significantly to students' postdictions of Exam 3. As with
Exams 1 and 2, prediction was the single largest contributor
to postdiction: Optimistic expectations for performance
contributed to optimistic evaluations of performance. In
addition, even though students devoted about 2 hr less to
study for Exam 3 than Exam 2, number of hours studied for
Exam 3 uniquely contributed to postdiction. Therefore,
students' optimistic expectations for their performance may
have served as a kind of anchor to their optimistic evalua-
tions of performance, and this optimism was adjusted on the
basis of the number of hours studied.
Because actual performance did not contribute signifi-
cantly to either prediction or postdiction of Exam 3, our
hypothesis that students would shift the basis of their
judgments from prior judgments of performance to prior
performance was not supported. Prior metacognitive judg-
ments of performance continued to have a greater associa-
tion with students' current judgments than actual perfor-
mance. Considering that past performance is the single best
predictor of future performance, the lack of a measurable
contribution from prior test performance is surprising. This
result is also surprising in light of the fact that students were
asked after the two prior exams to compare their predictions and postdictions with their actual performance and to plan for the next exam. Prior test performance could have served well as a basis for subsequent judgments of performance. The Pearson correlations between exam scores were some of the largest in this study: For Exams 1 and 2, r = .68; for Exams 1 and 3, r = .54; and for Exams 2 and 3, r = .72.

Table 5
Multiple Regressions With Prediction and Postdiction of Exam 3 as Outcome Variables

                         Prediction 3           Postdiction 3
Variable                 B     β     sr²        B     β     sr²
Prediction Exam 1       .05   .07   <.01      -.03  -.04   <.01
Prediction Exam 2       .32   .44    .08**    -.14  -.16   <.01
Prediction Exam 3                              .52   .43    .13***
Postdiction Exam 1     -.14  -.30    .01*      .12   .23    .05
Postdiction Exam 2      .05   .07   <.01       .37   .47    .04**
Hours studied Exam 1    .07   .05   <.01      -.07  -.04   <.01
Hours studied Exam 2    .10   .13    .01       .03   .03   <.01
Hours studied Exam 3    .12   .15    .02       .25   .26    .04*
Exam 1 performance      .15   .27    .02       .07   .10   <.01
Exam 2 performance     -.02  -.03   <.01      -.11  -.16   <.01
R²                      .46                    .61
Adjusted R²             .38                    .55
R                       .67***                 .78***

*p < .05. **p < .005. ***p < .001.
Discussion
In contrast to studies that have shown poor metamemory
accuracy, higher performing students in the present study
demonstrated high accuracy. On each exam, the predictions
and postdictions of students who scored at or above 70%
differed from actual test performance by less than 8 percent-
age points, with students scoring 80% or better showing
slight underconfidence, and students scoring between 70 and
79% showing slight overconfidence. Consistent with prior
metamemory research (e.g., Glenberg & Epstein, 1985;
Glenberg et al., 1987; Lovelace & Marsh, 1985; Maki &
Serra, 1992), postdictions were more accurate than predic-
tions. Although Hertzog et al. (1990) observed reduced
levels of accuracy upgrading when complex cognitive tasks
were involved, in the present study, accuracy was upgraded
using very complex tasks.
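The over- and underconfidence described above is the signed gap between judged and actual scores, averaged over students. A one-line sketch of that bias measure (the function name is ours):

```python
import numpy as np

def mean_bias(judgments, scores):
    """Mean signed difference between judged and actual exam scores.
    Positive values indicate overconfidence, negative values indicate
    underconfidence, and values near zero indicate good calibration."""
    return float(np.mean(np.asarray(judgments, dtype=float)
                         - np.asarray(scores, dtype=float)))
```

Note that a group mean can mask individual miscalibration, since one student's overconfidence can cancel another's underconfidence; the subgroup analyses above address this by splitting students by performance level.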
Lower performing students showed strong overconfi-
dence in their predictions, with overconfidence becoming
greatly exaggerated the lower they scored. In contrast,
postdictions for all but the lowest scoring students were
about as accurate as higher performing students. Thus,
before an exam, lower performing students appeared less
able than higher performing students to self-assess their
knowledge of the course, to judge their knowledge against
what they expected to be tested, or both. Once lower
performing students were finished with an exam, however,
they were as able as higher performing students to self-
evaluate what they knew against what was tested.
The lowest performing students exhibited very poor
self-assessment and self-evaluation of their knowledge.
Although the number of students in this group was small,
their performance paints a potentially bleak picture of their
academic future. These students appear to lack not only
knowledge of the course content, but perhaps worse, lack an
awareness of their own knowledge deficits. Roeser, Eccles,
and Strobel (1998) have provided some useful insights that
could explain why this group of students makes highly
unrealistic and optimistic judgments of performance even in
the face of low performance. These researchers suggested
that students who
believe they are competent at learning despite significant
evidence to the contrary [may do so] because the self-relevant
information embodied in negative academic events does not
influence their perceptions of themselves as learners. Aca-
demic failures may simply generate negative perceptions of
others who are perceived as responsible for the difficulties. (p. 162)
Low-performing students, therefore, may believe that they
will do well on tests but then attribute their poor perfor-
mance to externalizing factors such as a tricky test or
unreasonable teacher.
Our analyses showed that by the end of the course,
high-performing students increased their predictive and
especially postdictive accuracy, but low-performing stu-
dents continued to show little predictive accuracy. Data from
the high performing students support our hypothesis that
with the benefit of internally generated and externally
provided feedback over multiple tests, predictive and postdic-
tive accuracy will increase. Alternatively, increases in accu-
racy could be explained by greater domain familiarity, which
has been associated with greater prediction accuracy (e.g.,
Glenberg & Epstein, 1987; Schommer & Surber, 1986).
High-performing students acquired more knowledge during
the semester and, correspondingly, showed greater accuracy
increases than those who acquired less knowledge. Differ-
ences between high and low performers could also be
explained by motivation differences. Koriat and Goldsmith
(1996) have argued that strong incentives are necessary for
people to make accurate metamemory judgments. We be-
lieved that there would be strong incentives to improve
accuracy because students were enrolled in a required
course. Perhaps this was true only for the high-performing
students. Low-performing students may have been resigned
to getting low grades, thereby diminishing their incentive for
increasing accuracy.
We predicted that early in the semester, students' judg-
ments of performance would be more strongly associated
with prior judgments of performance than actual prior
performance. However, because prior performance is one of
the best predictors of future performance, students would
learn to rely more on prior performance than on prior
judgments of performance. Results showed that a shift to
prior performance did not occur. Students continued to rely
on their prior metamemory judgments. These results put in
question the generality of Koriat's (1997) and Kelley and
Jacoby's (1996) notions of a shift from theory-based to
experience-based judgments. With greater experience gained
through repeated testing in a naturalistic context, students' a
priori theories or beliefs about the influences on memory
judgments did not give way to their actual experiences with
making judgments. Rather than basing judgments on actual
performance, students continued to give greater weight to
their expectations for performance.
Expecting to do well on an exam because one has a
history of high performance reflects a person's positive
self-appraisals of academic competence. Self-appraisals
may exert strong influences on measures of metamemory
accuracy. For example, students who often perform well and
who hold positive self-appraisals of academic competence
will likely make optimistic judgments of performance. If
these students then perform well, they demonstrate high
metamemory accuracy. However, their greater accuracy may
not be due to superior memory monitoring ability, but rather
to the strong relation between optimistic expectations for
performance and a history of high performance. In contrast,
students who make optimistic judgments of performance on
the basis of positive self-appraisals and who perform poorly
will demonstrate low metamemory accuracy. The low accu-
racy may not be due to an inability to monitor, but rather to
expectations for performance that are out of sync with actual
performance. This could explain the continued low accuracy
demonstrated by the lowest performing students in the
present study.
Finally, we expected that low performance on one test
would be associated with greater study time on the next test,
and, conversely, that high performance would be associated
with less study time. This pattern has received some
empirical support in laboratory studies of metamemory (e.g.,
Dunlosky & Connor, in press; Mazzoni & Cornoldi, 1993;
Nelson & Leonesio, 1988). In the present study, however,
study times were unrelated to prior performance. A straight-
forward explanation for the lack of an association between
prior performance and study time is that students may not
have accurately reported their study times. A more compel-
ling possibility is that students failed to use the results of
their monitoring to regulate subsequent behavior. Ability to
monitor one's cognition is no guarantee that one can or will
control subsequent cognition (e.g., Koriat & Goldsmith,
1996; Vosniadou, Pearson, & Rogers, 1988; Zabrucky &
Ratner, 1986). Psychological and contextual constraints
must be considered. In an actual class, there are complex
influences on students' allocation of study time. Pressley,
Van Etten, Yokoi, Freebern, and Van Meter (1998) argued
that the strategies students use to manage their study time
involve many aspects of students' lives, all of which must be
effectively juggled to maximize grades. For example, the
decrease in study time for Exam 3 may have been due to the
fact that Exam 3 was a final exam administered when
students had many demands to meet.
In conclusion, higher performing students in a naturalistic
context made accurate metamemory judgments, and their
accuracy increased. Future research needs to examine whether
and how these students use their more accurate judgments to
self-regulate studying and learning. Future research should
also investigate whether externalizing attributions affect the
judgments of low-performing students. To help low-
performing students become better self-regulators of their
test preparation behaviors, their attributions may need as
much attention as their knowledge deficits.
References
Benjamin, A. S., & Bjork, R. A. (1996). Retrieval fluency as a
metacognitive index. In L. R. Reder (Ed.), Implicit memory and
metacognition (pp. 309-338). Hillsdale, NJ: Erlbaum.
Dunlosky, J., & Connor, L. T. (in press). Age differences in the
allocation of study time account for age differences in memory
performance. Memory & Cognition.
Dunlosky, J., & Hertzog, C. (1998). Training programs to improve
learning in later adulthood: Helping older adults educate themselves. In D. J. Hacker, J. Dunlosky, & A. C. Graesser (Eds.),
Metacognition in educational theory and practice (pp. 249-
275). Mahwah, NJ: Erlbaum.
Gigerenzer, G., Hoffrage, U., & Kleinbölting, H. (1991). Probabilistic mental models: A Brunswikian theory of confidence. Psychological Review, 98, 506-528.
Gillström, A., & Rönnberg, J. (1995). Comprehension calibration
and recall prediction accuracy of texts: Reading skill, reading
strategies, and effort. Journal of Educational Psychology, 87,
545-558.
Glenberg, A. M., & Epstein, W. (1985). Calibration of comprehen-
sion. Journal of Experimental Psychology: Learning, Memory,
and Cognition, 11, 702-718.
Glenberg, A. M., & Epstein, W. (1987). Inexpert calibration of
comprehension. Memory & Cognition, 15, 84-93.
Glenberg, A. M., Sanocki, T., Epstein, W., & Morris, C. (1987).
Enhancing calibration of comprehension. Journal of Experimen-
tal Psychology: General, 116, 119-136.
Good, T. L., & Brophy, J. (1995). Contemporary educational
psychology. White Plains, NY: Longman.
Hertzog, C., Dixon, R. A., & Hultsch, D. F. (1990). Relationships
between metamemory, memory predictions, and memory task
performance in adults. Psychology and Aging, 5, 215-227.
Horgan, D. (1990). Competition, calibration, and motivation.
Teaching Thinking and Problem Solving, 12, 5-10.
Horgan, D., Bol, L., & Hacker, D. J. (1997, August). An examina-
tion of the relationships among self, peer, and instructor
assessment. Paper presented at the Symposium on New Assess-
ment Methods at the 7th Annual Conference of the European
Association for Research on Learning and Instruction, Athens,
Greece.
Horgan, D., Hacker, D. J., & Huffman, S. (1997, March). How
students predict their exam performance. Paper presented at the
Southern Society for Philosophy and Psychology, Atlanta,
Georgia.
Kelley, C. M., & Jacoby, L. L. (1996). Adult egocentrism:
Subjective experience versus analytic bases for judgment. Jour-
nal of Memory and Language, 35, 157-175.
Koriat, A. (1997). Monitoring one's own knowledge during study:
A cue-utilization approach to judgments of learning. Journal of
Experimental Psychology: General, 126, 349-370.
Koriat, A., & Goldsmith, M. (1994). Memory in naturalistic and
laboratory contexts: Distinguishing the accuracy-oriented and
quantity-oriented approaches to memory assessment. Journal of
Experimental Psychology: General, 123, 297-316.
Koriat, A., & Goldsmith, M. (1996). Monitoring and control
processes in the strategic regulation of memory accuracy.
Psychological Review, 103, 490-517.
Koriat, A., Lichtenstein, S., & Fischhoff, B. (1980). Reasons for
confidence. Journal of Experimental Psychology: Human Learn-
ing and Memory, 6, 107-118.
Liberman, V., & Tversky, A. (1993). On the evaluation of
probability judgments: Calibration, discrimination, and monoto-
nicity. Psychological Bulletin, 114, 162-173.
Lichtenstein, S., Fischhoff, B., & Phillips, L. D. (1982). Calibration
of probabilities: The state of the art to 1980. In D. Kahneman, P.
Slovic, & A. Tversky (Eds.), Judgment under uncertainty:
Heuristics and biases (pp. 306-334). New York: Cambridge
University Press.
Lovelace, E. A. (1984). Metamemory: Monitoring future recallabil-
ity during study. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 10, 756-766.
Lovelace, E. A., & Marsh, G. R. (1985). Prediction and evaluation
of memory performance by young and old adults. Journal of
Gerontology, 40, 192-197.
Magliano, J. P., Little, L. D., & Graesser, A. C. (1993). The impact
of comprehension instruction on the calibration of comprehen-
sion. Reading Research and Instruction, 32, 49-63.
Maki, R. H. (1995). Accuracy of metacomprehension judgments
for questions of varying importance levels. American Journal of
Psychology, 108, 327-344.
Maki, R. H. (1998). Test predictions over text material. In D. J.
Hacker, J. Dunlosky, & A. C. Graesser (Eds.), Metacognition in
educational theory and practice (pp. 117-144). Mahwah, NJ:
Erlbaum.
Maki, R. H., & Berry, S. (1984). Metacomprehension of text
material. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 10, 663-679.
Maki, R. H., & Serra, M. (1992). The basis of test predictions for
text material. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 18, 116-126.
Mazzoni, G., & Cornoldi, C. (1993). Strategies in study time
allocation: Why is study time sometimes not effective? Journal
of Experimental Psychology: General, 122, 47-60.
Nelson, T. O., & Leonesio, R. J. (1988). Allocation of self-paced
study time and the "labor-in-vain effect." Journal of Experimen-
tal Psychology: Learning, Memory, and Cognition, 14, 676-686.
Nelson, T. O., & Narens, L. (1994). Why investigate metacogni-
tion? In J. Metcalfe & A. P. Shimamura (Eds.), Metacognition:
Knowing about knowing (pp. 1-26). Cambridge, MA: MIT
Press.
Pressley, M., & Ghatala, E. S. (1989). Metacognitive benefits of
taking a test for children and young adolescents. Journal of
Experimental Child Psychology, 47, 430-450.
Pressley, M., Levin, J. R., Ghatala, E. S., & Ahmad, M. (1987). Test
monitoring in young grade school children. Journal of Experi-
mental Child Psychology, 43, 96-111.
Pressley, M., Snyder, B. L., Levin, J. R., Murray, H. G., & Ghatala,
E. S. (1987). Perceived readiness for examination performance
(PREP) produced by initial reading of text and text containing
adjunct questions. Reading Research Quarterly, 22, 219-236.
Pressley, M., Van Etten, S., Yokoi, L., Freebern, G., & Van Meter, P.
(1998). The metacognition of college studentship: A grounded
theory approach. In D. J. Hacker, J. Dunlosky, & A. C. Graesser
(Eds.), Metacognition in educational theory and practice (pp.
347-366). Mahwah, NJ: Erlbaum.
Roeser, R. W., Eccles, J. S., & Strobel, K. R. (1998). Linking the
study of schooling and mental health: Selected issues and
empirical illustrations at the level of the individual. Educational
Psychologist, 33, 153-176.
Schommer, M., & Surber, J. R. (1986). Comprehension-monitoring
failure in skilled adult readers. Journal of Educational Psychol-
ogy, 78, 353-357.
Shaughnessy, J. J. (1979). Confidence-judgment accuracy as a
predictor of test performance. Journal of Research in Personal-
ity, 13, 505-514.
Sinkavich, F. J. (1995). Performance and metamemory: Do stu-
dents know what they don't know? Journal of Instructional
Psychology, 22, 77-87.
Vosniadou, S., Pearson, P. D., & Rogers, T. (1988). What causes
children's failures to detect inconsistencies in text? Representa-
tion versus comparison difficulties. Journal of Educational
Psychology, 80, 27-39.
Walczyk, J. J., & Hall, V. C. (1989). Effects of examples and
embedded questions on the accuracy of comprehension self-
assessments. Journal of Educational Psychology, 81, 435-437.
Weaver, C. A., III. (1990). Constraining factors in calibration of
comprehension. Journal of Experimental Psychology: Learning,
Memory, and Cognition, 16, 214-222.
Weaver, C. A., III, & Bryant, D. S. (1995). Monitoring of
comprehension: The role of text difficulty in metamemory for
narrative and expository text. Memory & Cognition, 23, 12-22.
Zabrucky, K., & Ratner, H. H. (1986). Children's comprehension
monitoring and recall of inconsistent stories. Child Development, 57, 1401-1418.
Received March 23, 1998
Revision received June 10, 1999
Accepted June 10, 1999