Abstract—The need for a better integration of the new generation of computer-assisted-surgical systems has been recently emphasized. One necessity to achieve this objective is to retrieve data from the operating room (OR) with different sensors, then to derive models from these data. Recently, the use of videos from cameras in the OR has demonstrated its efficiency. In this paper, we propose a framework to assist in the development of systems for the automatic recognition of high-level surgical tasks using microscope video analysis. We validated its use on cataract procedures. The idea is to combine state-of-the-art computer vision techniques with time series analysis. The first step of the framework consisted in the definition of several visual cues for extracting semantic information, therefore characterizing each frame of the video. Five different image-based classifiers were therefore implemented. A step of pupil segmentation was also applied for dedicated visual cue detection. Time series classification algorithms were then applied to model time-varying data. Dynamic time warping and hidden Markov models were tested. This association combined the advantages of all methods for a better understanding of the problem. The framework was finally validated through various studies. Six binary visual cues were chosen along with 12 phases to detect, obtaining accuracies of 94%.

Index Terms—Dynamic time warping (DTW), feature extraction, hidden Markov models (HMM), surgical microscope, surgical process model, surgical workflow, video analysis.

I. INTRODUCTION

Over the past few years, the increased availability of sensor devices has transformed the operating room (OR) into a rich and complex environment. With this technological emergence, the need for new tools for better resource support and surgical assessment increases [1]. In parallel, better management of all devices, along with improved safety, is increasingly necessary. In order to better apprehend these new challenges in the context of computer-assisted-surgical (CAS) systems and image-guided surgery, recent efforts have been made on the creation of context-aware systems. The idea is to understand the current situation and automatically adapt the assistance functions accordingly. Being able to automatically extract information from the OR, such as events, steps, or even adverse events, would allow the management of context-aware systems, but also the postoperative generation of reports or the evaluation of surgeons and surgical tool use. The goal of CAS systems based on surgery modeling is to retrieve low-level information from the OR and then to automatically extract high-level tasks from these data [2]. Consequently, it would be beneficial for surgeons performing the surgery, allowing better systems management, automatically reporting procedures, evaluating surgeons, and increasing surgical efficiency and quality of care in the OR. This is why, in the context of CAS systems, the automatic extraction of high-level tasks in the OR has recently emerged.

To design such models, there are real advantages to automating the data extraction process, mainly because manual work is time consuming and can be affected by human bias. Automatic data extraction is now easier, thanks to the high number of sensor devices. Among all sensors, teams have recently focused on videos from cameras already employed during the procedure, such as endoscopes or microscopes, which are a rich source of information. Video images provide information that can be processed by image-based analysis techniques, and then fused with other data from different sensors for creating models capturing all levels of surgery granularity. Moreover, computer vision techniques provide complex processing algorithms to transform images and videos into a new representation that can be further used for machine learning through supervised or unsupervised classification. Compared to other data extraction techniques, the use of video not only eliminates the need to install additional material in the OR, but it is also a source of information that does not need to be controlled by humans, thus automating the assistance provided to surgeons without altering the surgical routine.

Current work using surgical videos has made progress in classifying and automating the recognition of high-level tasks in the OR. Four distinct types of input data, associated with their related applications, have recently been investigated. First, the use of external OR videos has been tested. Bhatia et al. [3] analyzed overall OR view videos. After identifying four states of a common surgical procedure, relevant image features were extracted and HMMs were trained to detect OR occupancy. Padoy et al. [4] also used low-level image features through 3-D motion flows combined with hierarchical HMMs to recognize online

Manuscript received July 18, 2011; revised October 12, 2011 and December 14, 2011; accepted December 19, 2011. Date of publication December 23, 2011; date of current version March 21, 2012. Asterisk indicates corresponding author.
*F. Lalys is with the U1099 Institut National de la Santé et de la Recherche Médicale and the Faculté de Médecine, University of Rennes I, Rennes F-35043, France (e-mail: florent.lalys@irisa.fr).
L. Riffaud is with the U1099 Institut National de la Santé et de la Recherche Médicale and the Faculté de Médecine, University of Rennes I, Rennes F-35043 Cedex, France, and also with the Neurosurgical Department, University Hospital of Rennes, Rennes 35000, France (e-mail: laurent.riffaud@chu-rennes.fr).
D. Bouget and P. Jannin are with the U1099 Institut National de la Santé et de la Recherche Médicale and the Faculté de Médecine, University of Rennes I, Rennes F-35043, France (e-mail: david.bouget@irisa.fr; pierre.jannin@irisa.fr).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TBME.2011.2181168
Fig. 3. Different stages of the segmentation step for the detection of instruments: (a) input image, (b) clean mask, (c) ROI corresponding to the first connected component, and (d) ROI corresponding to the second connected component.

Fig. 4. SURF features detected on the image from Fig. 3(a), shown as blue circles: (a) SURF points on the first connected component; (b) SURF points on the second connected component.

For the description and the segmentation step, the aim was to provide a robust and reproducible method for describing the ROIs that have been isolated by the segmentation step, and then to identify whether or not the ROI is an instrument using the descriptors. Given that the goal was to detect instruments, we wanted to extract scale- and rotation-invariant features. We used the same approach as that described in Section II-A3, with local description of features using a BVW approach, and finally a supervised classification using the SVM. Fig. 4(a) and (b) shows an example of the extraction of local features for the input image of Fig. 3(a). If the same instrument appears with various scales and orientations, we will be able to extract the same feature points with the same descriptors. Using this approach, ROIs can be classified as belonging to an instrument or not.

5) Alternative Method: This approach can be seen as conventional image classification. Each frame is represented by a signature composed of global spatial features. The RGB and HSV spaces, the co-occurrence matrix along with Haralick descriptors [22], the spatial moments [23], and the discrete cosine transform [24] coefficients were all computed. Each signature was finally composed of 185 complementary features that had to be reduced by feature selection. We combined a filter and a wrapper approach [25] using the method described by Mak and Kung [26], taking the union of both results to improve the final selection. The recursive feature elimination SVM [27] was chosen for the wrapper method and the mutual information (MI) [28] was chosen for the filter method. Finally, 40 features were kept, and an SVM was applied to extract the desired visual cue. This particular procedure has been presented and validated in Lalys et al. [29].

Once every visual cue of a particular surgery has been defined and detected, creating a semantic signature composed of binary values for each frame, the sequences of frame signatures (i.e., the time series) must be classified using appropriate methods. Two different approaches were tested here: HMM modeling and classification by DTW alignment.

Fig. 5. Left-right HMM, where each state corresponds to one surgical phase.

6) Hidden Markov Model: HMMs [30] are statistical models used for modeling nonstationary vector time series. An HMM is formally defined by a five-tuple (S, O, π, A, B), where S = {s_1, ..., s_N} is a finite set of N states, O = {o_1, ..., o_M} is a set of M symbols in a vocabulary, π = {π_i} are the initial state probabilities, A = {a_ij} the state transition probabilities, and B = {b_i(o_k)} the output probabilities. Here, outputs of the supervised classification were treated as observations for the HMM. States were represented by the surgical phases, which generated a left-right HMM (see Fig. 5). Transition probabilities from one state to its consecutive state were computed for each training video, and then averaged. If we set one such probability to ε, the probability of remaining in the same state is then 1 − ε. Output probabilities were computed as the probability of making an observation in a specific state. Training videos were applied to the supervised classification for extracting binary cues, and output probabilities were obtained by manually counting the number of occurrences for each state. Then, given the observations and the HMM structure, the Viterbi algorithm [31] identifies the most likely sequence of states. HMM training, like the feature selection process, must be performed only once for each learning database.

7) Dynamic Time Warping: We used the DTW algorithm [32] as a method to classify the image sequences in a supervised manner. DTW is a well-known algorithm used in many areas (e.g., handwriting and online signature matching, gesture recognition, data mining, time series clustering, and signal processing). The aim of DTW is to compare two sequences X := (x_1, x_2, ..., x_N) of length N and Y := (y_1, y_2, ..., y_M) of length M. These sequences may be discrete signals (time series) or, more generally, feature sequences sampled at equidistant points in time. To compare two different features, one needs a local cost measure, sometimes referred to as a local distance measure, which is defined as a function. Frequently, it is simply the Euclidean distance. In our case, however, considering that we are using binary vectors, we use the Hamming distance, well adapted to this type of signature. In other words, the DTW algorithm finds an optimal match between two sequences of feature vectors that allows for stretched and compressed sections of the sequence. To compare surgeries, we created an average surgery based on the learning dataset using the method described by Wang and Gasser [33]. Each query surgery is first processed in order to extract semantic information, and then the sequence of image signatures is introduced into the DTW algorithm to be compared to the average surgery. Once warped, the phases of the average surgery are transposed to the unknown surgery in a supervised way.
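The Viterbi decoding step described above can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the three-phase model, the value of eps, and all emission probabilities are toy assumptions standing in for the averaged values learned from the training videos.

```python
import math

def viterbi(obs, pi, A, B):
    """Return the most likely state sequence for a list of observation indices."""
    n = len(pi)
    log = lambda p: math.log(p) if p > 0 else float("-inf")
    # delta[s] = best log-probability of any path ending in state s.
    delta = [log(pi[s]) + log(B[s][obs[0]]) for s in range(n)]
    back = []  # back[t][s] = best predecessor of state s at step t + 1
    for o in obs[1:]:
        prev = [max(range(n), key=lambda r: delta[r] + log(A[r][s]))
                for s in range(n)]
        delta = [delta[prev[s]] + log(A[prev[s]][s]) + log(B[s][o])
                 for s in range(n)]
        back.append(prev)
    path = [max(range(n), key=lambda s: delta[s])]
    for prev in reversed(back):      # backtrack to recover the sequence
        path.append(prev[path[-1]])
    return path[::-1]

# Toy left-right model with three phases: probability eps of advancing to
# the next phase, 1 - eps of staying (the last phase is absorbing).
eps = 0.1
A = [[1 - eps, eps, 0.0],
     [0.0, 1 - eps, eps],
     [0.0, 0.0, 1.0]]
pi = [1.0, 0.0, 0.0]
B = [[0.8, 0.1, 0.1],   # output probabilities of three cue signatures
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]
print(viterbi([0, 0, 1, 1, 1, 2, 2], pi, A, B))  # [0, 0, 1, 1, 1, 2, 2]
```

Because the transition matrix is upper bidiagonal, any decoded sequence is monotone: phases can only be held or advanced, which is exactly the left-right structure of Fig. 5.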
970 IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 59, NO. 4, APRIL 2012
Additionally, global constraints (also known as windowing functions) can be added to the conventional algorithm in order to constrain the indices of the warping path. With this method, the path is not allowed to fall outside the constraint window. For surgery modeling, we used the Itakura parallelogram [34], which prevents the warping path from straying too far from the diagonal.

B. Data

1) Video Dataset: Our framework was evaluated on cataract surgeries (eye surgeries). The principle of cataract surgery is to remove the natural lens of the eye and insert an artificial one (referred to as an intraocular lens, or IOL) in order to restore the lens transparency. In this project, 20 cataract surgeries from the University Hospital of Munich were included (with a mean surgical time of 15 min). Three different surgeons performed these procedures. All videos were recorded using the OPMI Lumera surgical microscope (Carl Zeiss) with an initial resolution of 720 × 576 at 25 f/s. Considering the goal of recognizing only high-level surgical tasks, each video was down-sampled to 1 f/s. Original frames were also spatially down-sampled by a factor of 4 with a 5 × 5 Gaussian kernel. Twelve surgical phases were defined (see Fig. 6).

Fig. 6. Typical digital microscope frames for the 12 surgical phases: 1-preparation, 2-betadine injection, 3-lateral corneal incision, 4-principal corneal incision, 5-viscoelastic injection, 6-capsulorhexis, 7-phacoemulsification, 8-cortical aspiration of the big pieces of the lens, 9-cortical aspiration of the remanescent lens, 10-expansion of the principal incision, 11-implantation of the artificial IOL, 12-adjustment of the IOL + wound sealing.

2) Definition of the Visual Cues: Six binary visual cues were chosen for discriminating the surgical phases. The pupil color range, defined as being orange or black, was extracted using a preliminary segmentation of the pupil along with a color histogram analysis. Also analyzing only the pupil after the segmentation step, the global aspect of the cataract (defined as parceled out or not) was recognized using the BVW approach with local spatial descriptors. The presence of antiseptic, recognizable by virtue of its specific color, was detected using color histogram analysis, but on the entire image. Concerning the detection of surgical instruments, only one had a characteristic shape: the knife. We trained a Haar classifier using 2000 negative images and 500 positive images for its detection. All other instruments have very similar shapes and are very difficult to categorize. For this reason, we chose to detect the presence of an

C. Validation Studies

Initial indexing was performed by surgeons for each video, defining the phase transitions along with all visual cues. With this labeled video database, we were able to evaluate both aspects of our framework, i.e., the detection of the different visual cues and the global recognition rate of the entire framework. From each video, we randomly extracted 100 frames in order to create the image database, finally composed of 2000 labeled images, each being associated with one video. This image database was used for the assessment of visual cue detection, whereas the videos, along with their corresponding frames from the image database, were used to assess the entire framework.

One step of our procedure did not require any training stage: the preliminary step of pupil segmentation. This step was simply validated over the entire video database by testing each frame of each video. During this validation, a pupil was considered correctly segmented if and only if the circle found by the Hough transform precisely matched the pupil. A percentage was then obtained corresponding to the accuracy of the segmentation.

The second aspect of our framework that was assessed was the recognition of all visual cues. Independently, for knife recognition, the training stage was performed using manually selected positive and negative images for better object training. For the validation, 1000 test images were used and global recognition accuracy was computed. The four other visual cue classifiers were assessed through tenfold cross-validation studies. The image database was therefore divided into ten subsets, randomly selected from the 20 videos. Nine were used for training while the prediction was made on the tenth subset. One test subset was consequently composed of frames from two videos, while the training set was composed of the rest of the frames from the 18 other videos. This procedure was repeated ten times and the results were averaged. After evaluating each classifier, we validated their use compared to a traditional image-based classifier, i.e., compared to feature extraction, selection, and classification as performed by the conventional classifier. Additionally, we also compared the four local keypoint detectors presented in Section II-A3, along with the optimal number of visual words, for both the texture-oriented classifier and the detection of instrument presence. This validation, performed under the same conditions as the detection of the visual cues, was necessary to optimize the detection of texture- and shape-oriented information by the BVW approach.

Finally, we evaluated the global framework, including visual cue recognition and the time series analysis, with the same type of cross-validation. Similarly to the visual cue recognition, at each stage 18 videos (and their corresponding frames from the image database) were used for training, and recognition was performed on the two others. For this assessment, the criterion chosen was the frequency recognition rate (FRR), defined as the
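The video-wise tenfold split used in the validation studies can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function name `video_folds` and the fixed seed are our assumptions; the point is that frames from one video never straddle the train/test boundary.

```python
import random

def video_folds(video_ids, n_folds=10, seed=0):
    """Yield (train, test) lists of frame indices, grouping frames by video."""
    videos = sorted(set(video_ids))
    random.Random(seed).shuffle(videos)
    # With 20 videos and 10 folds, each fold holds out exactly 2 videos.
    groups = [videos[k::n_folds] for k in range(n_folds)]
    for held_out in groups:
        held = set(held_out)
        test = [i for i, v in enumerate(video_ids) if v in held]
        train = [i for i, v in enumerate(video_ids) if v not in held]
        yield train, test

# 20 videos with 100 randomly extracted frames each: frame i is from video i // 100.
video_ids = [i // 100 for i in range(2000)]
for train, test in video_folds(video_ids):
    assert len(test) == 200 and len(train) == 1800
print("10 folds, each holding out the frames of 2 videos")
```

Grouping by video before splitting avoids the optimistic bias that would arise if near-duplicate frames of the same video appeared on both sides of the split.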
LALYS et al.: FRAMEWORK FOR THE RECOGNITION OF HIGH-LEVEL SURGICAL TASKS FROM VIDEO IMAGES FOR CATARACT SURGERIES 971
Fig. 7. BVW validation studies: comparison of accuracies with different numbers of visual words and different keypoint detectors. (a) Detection of instrument presence. (b) Recognition of the cataract aspect.

TABLE III. MEAN ACCURACY (STANDARD DEVIATION) FOR THE RECOGNITION OF THE SIX BINARY VISUAL CUES

TABLE IV. MEAN, MINIMUM, AND MAXIMUM FRR OF THE DTW

Fig. 8. Distance map of two surgeries and dedicated warping path using the Itakura constraint (top), along with transposition of the surgical phases (middle) and the visual cues detected by the system (bottom).
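The alignment-and-transposition idea behind Fig. 8 can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it uses the Hamming local cost on binary cue signatures but omits the Itakura constraint, and the toy signatures and phase labels are assumptions.

```python
def hamming(a, b):
    """Local cost: number of differing binary cues."""
    return sum(x != y for x, y in zip(a, b))

def dtw_path(X, Y):
    """Optimal DTW warping path between signature sequences X and Y."""
    INF = float("inf")
    n, m = len(X), len(Y)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = hamming(X[i - 1], Y[j - 1]) + min(
                D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    path, i, j = [(n - 1, m - 1)], n, m
    while (i, j) != (1, 1):          # backtrack along cheapest predecessors
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
        path.append((i - 1, j - 1))
    return path[::-1]

# Average surgery: 3 frames of binary cue signatures with phase labels.
avg = [(0, 0), (0, 1), (1, 1)]
phases = ["incision", "injection", "aspiration"]
# Query surgery: the same progression, stretched over 5 frames.
query = [(0, 0), (0, 0), (0, 1), (0, 1), (1, 1)]
labels = [None] * len(query)
for qi, ai in dtw_path(query, avg):
    labels[qi] = phases[ai]   # transpose the matched average frame's phase
print(labels)  # ['incision', 'incision', 'injection', 'injection', 'aspiration']
```

The warping path absorbs the time-stretch between the two sequences, so each query frame inherits the phase of the average-surgery frame it aligns with.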
In particular, DTW captures the sequentiality of surgical phases and is well adapted to this type of detection. The cost function between two surgical procedures with the same sequence of phases, but with phase time differences, will be very low. The advantage is that it can accurately synchronize two surgical procedures by maximally reducing time differences. The main limitation concerning the use of DTW, however, appears when phase sequence differences exist between two surgeries. The warping path would not correctly synchronize the phases and errors would occur. In the context of cataract surgery, the procedure is standardized and reproducible, justifying the very good recognition results. But we can imagine that for surgeries that are not completely standardized, DTW would not be adapted. In this case, HMMs could be used by adding bridges between the different states of the model (the transition matrix should be adapted in that case), allowing the sequence to be resumed and the same phase to be performed multiple times. The state machine would not be a left-right structure but would include more complex possibilities with many bridges between states. As a drawback, the complexity of such HMMs could severely affect the recognition accuracy. For each surgical procedure, the HMM structure should be created by minimizing the possible transitions between states so as not to affect the classification phase. In the particular case of cataract surgery, the results showed that the DTW algorithm was, not surprisingly, quite better than the HMM. Compared to the HMM, the other limitation of the DTW algorithm is that it cannot be used online, as the entire procedure is required to determine the optimum path. For online applications, the HMM classification should be used.

E. Use in Clinical Routine

The automatic recognition of surgical phases is very useful for context-awareness applications. For instance, there is a need for an analysis methodology specifying which kind of information needs to be displayed for the surgeon's current task. The use of videos, such as endoscope or microscope videos, allows automating the surgeon's assistance without altering the surgical routine. Moreover, it can also support intraoperative decision making by comparing situations with previously recorded or known situations. This would result in a better sequence of activities and improved anticipation of possible adverse events, which would, on the one hand, optimize the surgery and, on the other hand, improve patient safety. Because the time-series methods used in the framework are not fully online algorithms, the online use of the system remains a long-term objective. In its present form, the computation time of the recognition process for one frame (visual cue detection + DTW/HMM classification) was evaluated at around 3 s on a standard 2-GHz computer. It could be introduced into clinical routine for postoperative video indexation and the creation of prefilled reports. Surgical videos are increasingly used for learning and teaching purposes, but surgeons often do not use them because of the huge amount of surgical video. One can imagine a labeled database of videos with full and rapid access to all surgical phases for easy browsing. The video database created would contain the relevant surgical phases of each procedure for easy browsing. We could also imagine the creation of prefilled reports that would need to be completed by surgeons.

V. CONCLUSION

The goal of context-aware assistance is to collect information from the OR and automatically derive a model that can be used for advancing CAS systems. To provide this type of assistance, the current situation needs to be recognized at specific granularity levels. A new kind of input data is more and more used for detecting such high-level tasks: the videos provided by existing hardware in the OR. In this paper, our analysis is based exclusively on microscope video data. We proposed a recognition system based on application-dependent image-based classifiers and time series analysis, using either an HMM or the DTW algorithm. Using this framework, we are now able to recognize the major surgical phases of every new procedure. We have validated this framework with cataract surgeries, where 12 phases were defined by an expert, as well as six visual cues, achieving a global recognition rate of around 94%. This recognition process, using a new type of input data, appears to be a nonnegligible progression toward the construction of context-aware surgical systems. In future work, lower-level information, such as the surgeon's gestures, will need to be detected in order to create more robust multilayer architectures.

REFERENCES

[1] K. Cleary, H. Y. Chung, and S. K. Mun, "OR 2020: The operating room of the future," Laparoendosc. Adv. Surg. Tech., vol. 15, no. 5, pp. 495-500, 2005.
[2] P. Jannin and X. Morandi, "Surgical models for computer-assisted neurosurgery," Neuroimage, vol. 37, no. 3, pp. 783-791, 2007.
[3] B. Bhatia, T. Oates, Y. Xiao, and P. Hu, "Real-time identification of operating room state from video," in Proc. AAAI, 2007, pp. 1761-1766.
[4] N. Padoy, T. Blum, H. Feussner, M. O. Berger, and N. Navab, "On-line recognition of surgical activity for monitoring in the operating room," in Proc. 20th Conf. Innovative Appl. Artif. Intell., 2008, p. 7.
[5] S. Speidel, G. Sudra, J. Senemaud, M. Drentschew, B. P. Müller-Stich, C. Gutt, and R. Dillmann, "Situation modeling and situation recognition for a context-aware augmented reality system," Prog. Biomed. Opt. Imag., vol. 9, no. 1, p. 35, 2008.
[6] B. Lo, A. Darzi, and G. Yang, "Episode classification for the analysis of tissue-instrument interaction with multiple visual cues," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2003, pp. 230-237.
[7] U. Klank, N. Padoy, H. Feussner, and N. Navab, "Automatic feature generation in endoscopic images," Int. J. Comput. Assist. Radiol. Surg., vol. 3, no. 3-4, pp. 331-339, 2008.
[8] T. Blum, H. Feussner, and N. Navab, "Modeling and segmentation of surgical workflow from laparoscopic video," in Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Interv., 2010, vol. 13, pp. 400-407.
[9] S. Voros and G. D. Hager, "Towards real-time tool-tissue interaction detection in robotically assisted laparoscopy," in Proc. Biomed. Robot. Biomechatron., 2008, pp. 562-567.
[10] C. E. Reiley and G. D. Hager, "Decomposition of robotic surgical tasks: An analysis of subtasks and their correlation to skill," presented at the MICCAI Workshop, London, U.K., 2009.
[11] F. Lalys, L. Riffaud, X. Morandi, and P. Jannin, "Surgical phases detection from microscope videos by combining SVM and HMM," in Proc. Med. Comput. Vision Workshop, MICCAI, Beijing, China, 2010, p. 9.
[12] M. Swain and D. Ballard, "Color indexing," Int. J. Comput. Vision, vol. 7, no. 1, pp. 11-32, 1991.
[13] P. V. C. Hough, "Machine analysis of bubble chamber pictures," in Proc. Int. Conf. High Energy Accelerators Instrum., 1959.
[14] A. Smeulders, M. Worring, S. Santini, A. Gupta, and R. Jain, "Content-based image retrieval at the end of the early years," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 12, pp. 1349-1380, Dec. 2000.
[15] D. G. Lowe, "Object recognition from scale-invariant features," in Proc. Int. Conf. Comput. Vision, 1999, vol. 2, pp. 1150-1157.
[16] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Proc. Eur. Conf. Comput. Vision, 2006, pp. 404-417.
[17] C. Harris and M. Stephens, "A combined corner and edge detector," in Proc. Alvey Vision Conf., 1988, pp. 147-151.
[18] M. Agrawal and K. Konolige, "CenSurE: Center surround extremas for realtime feature detection and matching," in Proc. Eur. Conf. Comput. Vision, 2008, vol. 5305, pp. 102-115.
[19] P. Viola and M. Jones, "Robust real-time face detection," Int. J. Comput. Vision, vol. 57, no. 2, pp. 137-154, 2004.
[20] Y. Freund and R. E. Schapire, "A decision-theoretic generalization of on-line learning and an application to boosting," in Proc. 2nd Eur. Conf. Comput. Learning Theory, 1995.
[21] C. P. Papageorgiou, M. Oren, and T. Poggio, "A general framework for object detection," in Proc. Int. Conf. Comput. Vision, 1998, pp. 555-562.
[22] R. M. Haralick, K. Shanmugam, and I. Dinstein, "Textural features for image classification," IEEE Trans. Syst., Man, Cybern., vol. 3, no. 6, pp. 610-621, Nov. 1973.
[23] M. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Inf. Theory, vol. 8, no. 2, pp. 179-187, 1962.
[24] N. Ahmed, T. Natarajan, and K. R. Rao, "Discrete cosine transform," IEEE Trans. Comput., vol. C-23, no. 1, pp. 90-93, Jan. 1974.
[25] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: Wiley, 1973.
[26] M. W. Mak and S. Y. Kung, "Fusion of feature selection methods for pairwise scoring SVM," Neurocomputing, vol. 71, pp. 3104-3113, 2008.
[27] I. Guyon, J. Weston, S. Barnhill, and V. Vapnik, "Gene selection for cancer classification using support vector machines," Mach. Learning, vol. 46, pp. 389-422, 2002.
[28] R. W. Hamming, Coding and Information Theory. Englewood Cliffs, NJ: Prentice-Hall, 1980.
[29] F. Lalys, L. Riffaud, X. Morandi, and P. Jannin, "Automatic phases recognition in pituitary surgeries by microscope images classification," in Proc. 1st Int. Conf. Inf. Process. Comput.-Assist. Interv., Geneva, Switzerland, 2010, pp. 34-44.
[30] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition," Proc. IEEE, vol. 77, no. 2, pp. 257-286, 1989.
[31] A. Viterbi, "Error bounds for convolutional codes and an asymptotically optimum decoding algorithm," IEEE Trans. Inf. Theory, vol. 13, no. 2, pp. 260-269, Apr. 1967.
[32] E. J. Keogh and M. J. Pazzani, "An enhanced representation of time series which allows fast and accurate classification, clustering and relevance feedback," in Proc. Predicting the Future: AI Approaches to Time-Series Problems, 1998, pp. 44-51.
[33] K. Wang and T. Gasser, "Alignment of curves by dynamic time warping," Ann. Statist., vol. 25, no. 3, pp. 1251-1276, 1997.
[34] V. Niennattrakul and C. A. Ratanamahatana, "Learning DTW global constraint for time series classification," Artif. Intell. Papers, 1999.

Authors' photographs and biographies not available at the time of publication.