Longitudinal Viewer
January 23rd, 2019
Oreoluwa Adesina
Alper Anik
Emmanuel Shedu
Gabriel Sinclair
Austin Wilson
Keming Xu
from the
Center for Leadership Education, Whiting School of Engineering
Contents
Executive Summary
  Internal Analysis
  Market and Customer Analysis
  Competitor Analysis & Deployment Considerations
Introduction
Internal Analysis
  Heuristic Evaluation of User Experience
  Recommendations for Usability Testing
  Physiological Measures in Usability Testing
Market and Customer Analysis
  Target Market & Customers
  Prospective Market & Customers
Competitive Analysis
  MS BioScreen (University of California San Francisco)
  MS Mosaic (Duke University)
  AnamneVis (Stony Brook University)
  Vie-Visu (University of Vienna)
  VISITORS [VISualizatIon of Time-Oriented RecordS] (Ben-Gurion University of the Negev)
  Lifelines2 (University of Maryland)
  EventFlow (University of Maryland)
  Dartmouth Atlas Project (Dartmouth College)
Key Features
Deployment Considerations
Annotated Bibliography
  Visualization - General
  Visualization - Patient-Oriented
  Visualization - Specific Tools
  Evaluation - User-Based
  Evaluation - Heuristic
  Evaluation - Physiological
Other Works Cited
Appendix 1: Patient-Neurologist Barriers to Communication
Appendix 2: Heuristic Evaluation
Appendix 3: …
Appendix 4: …
Appendix 5: American Academy of Neurology Survey Including (i) US Neurologists by Subspecialty, (ii) Neurology Member Type
Appendix 6: Price of Commercial Visualization Software
Appendix 7: Additional Information for Serviceable Addressable Market
Appendix 8: Top 16 NIH Funded Institutions in US with High Neurologist Populations
Appendix 9: Countries with the Highest Prevalence of MS
Appendix 10: Additional Information on Researcher Spending & Populations (Heart Disease & Prostate Cancer)
Appendix 11: AnamneVis Hierarchical Layout
Executive Summary
The Multiple Sclerosis (MS) Longitudinal Viewer visualization tool receives medical
information from an Electronic Medical Record (EMR) system and presents it in an easily
digestible visual format. This provides multiple benefits over the traditional EMR interface: it helps clinicians identify symptoms faster, improves the effectiveness of the treatment process, and decreases the medical institution's expenses.
Internal Analysis
The performance of the MS Longitudinal Viewer tool can be analyzed from two
standpoints: heuristic and practical. In this report, we conduct our own heuristic analysis,
comparing the tool’s performance to a set of guidelines defined by usability researchers; we also
recommend strategies for practical usability testing with sample users in the future. We present a
set of problems identified in the heuristic evaluation, sample tasks for practical usability testing, and a discussion of heart rate variability as a possible metric for assessing usability in some contexts.
We also identify prostate cancer and heart diseases, such as hypertrophic cardiomyopathy (HCM) and arrhythmogenic right ventricular cardiomyopathy (ARVC), as ideal market spaces for the MS Longitudinal Viewer to serve in the future.
Introduction
Today's extensive medical information systems have increased the amount and
availability of general medical knowledge and patient-specific data. And yet, information gaps at the point of care are widening; clinicians are under increasing pressure to synthesize best evidence, review more patient data, and complete more clinical tasks in less time (Lesselroth &
Pieczkiewicz, 2011). In a 2014 survey of MS neurologists (Appendix 1), 47% of these
neurologists said that they did not have enough time to discuss all of their patients’ concerns.
While EMR systems have enhanced physicians' productivity, the human interface of these systems remains cumbersome. To illustrate, 37% of physicians in one study reported that interacting with their EMR databases was too time-consuming (Rind et al., 2013). Another
study showed that 68% of physician time was spent on EMR documentation and paperwork and
that EMRs were designed as billing systems, not for patient care (“Our Story,” 2018).
A data visualization tool can leverage large, complicated datasets and enable physicians
to more rapidly contextualize relationships. For instance, Huber, Krishnaraj, Monaghan, &
Gaskin (2018) reported that by using Tableau, they created a clinical dashboard to display data
that allowed clinicians to analyze data quickly and to identify trends more rapidly. A data
visualization tool also enables clinicians and researchers to track a patient’s disease progress
over time and compare the individual’s trajectory to that of a reference group of similar patients
(Onukwugha, Plaisant, & Shneiderman, 2016). Further, a data visualization tool enables
researchers to look at population data in novel ways, identify extreme outliers, and stratify
subgroups of people based upon data from health records, genomic tests, imaging, and disease
progression (Haynes, Yao, McDonald, Sahota, & Ackloo, 2008).
The economic benefit of visualization software is evidenced at Massachusetts General
Hospital, where doctors and nurses used Tableau to access and view data that enabled them to
reduce rates of hospital-acquired infections (Erler & Ohmann, 2015). Ultimately, they reduced
catheter-related urinary tract infections by 85% (ibid). Piedmont Healthcare in North Georgia
leveraged Tableau to help them reorganize physician schedules, freeing physicians to focus all
their attention on one patient-centered activity at a time (Heimer, 2018). They reduced heart failure readmissions by 10% and heart attack readmissions by 12% (ibid). In total, they saved $2M annually and increased patient satisfaction by 7% (ibid).
Internal Analysis
To better understand the market prospects for the MS Longitudinal Viewer tool, we
recommend analyzing its functionality in two standard ways: heuristic evaluation and usability
testing. We have conducted a heuristic evaluation ourselves, though we recommend repeating it
throughout the development process with evaluators of varying expertise. We have also
developed a framework and guidelines for future usability testing.
A heuristic evaluation entails a comparison of the tool’s user experience to an accepted
set of heuristics for optimal usability; this comparison is usually conducted by experts in either
usability or in the field of the tool’s use (“Heuristic Evaluations,” 2013). While the heuristic
evaluation can identify sources or categories of issues that users may experience, the problems
identified in heuristic evaluation often differ from those identified in practical usability testing,
so the two cannot be substituted for each other (ibid).
Usability testing is an evaluation of the tool’s performance with representative users and
tasks. Participants with expertise levels similar to the intended end users are assigned to
complete typical tasks, and their success rates, speed, satisfaction, and any problems encountered
are recorded by observers (“Usability Testing,” 2013). Additionally, studies have shown that
certain physiological metrics, such as changes in galvanic skin resistance or heart rate, may mark
the occurrence of problems that test users are not even consciously aware of (Wilson & Sasse,
2000). Therefore, we recommend a usability testing scheme that incorporates elements of both
human observation and biometrics.
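Studies like Wilson & Sasse (2000) use heart-rate-based measures as implicit usability signals. As a concrete illustration, the sketch below computes RMSSD (root mean square of successive differences of inter-beat intervals), a standard heart rate variability summary statistic; the choice of RMSSD and the sample data are our own illustration, not a metric the cited study prescribes.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of inter-beat (RR) intervals.

    A drop in RMSSD during a task segment, relative to a resting baseline,
    is one commonly used indicator of elevated stress or arousal.
    """
    if len(rr_intervals_ms) < 2:
        raise ValueError("need at least two RR intervals")
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Illustrative (synthetic) RR intervals in milliseconds:
baseline = [812, 790, 830, 805, 795, 820]   # larger beat-to-beat variation
hard_task = [780, 782, 779, 781, 780, 778]  # variability suppressed
```

Comparing `rmssd(hard_task)` against `rmssd(baseline)` for each task segment could flag moments of difficulty that the participant never verbalizes.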
One classic example of a broad, high-level heuristic set is Shneiderman's Visual Information-Seeking Mantra: "Overview first, zoom and filter, then details on demand" (Shneiderman, 2003). These types of heuristic sets rely heavily on the expertise of evaluators to impute contextual meaning and develop potential problem scenarios. On the other hand, some sets of heuristics are highly
rigorous and detailed; for instance, Pierotti’s expansion on the Nielsen heuristics takes the form
of a checklist totaling hundreds of questions (Tarrell et al., 2014). Virtually anyone, regardless of
prior knowledge, could explore an interface and analyze it using this checklist; however,
applying it to particularly specialized visualizations may lead to ignoring usability problems of
which the checklist did not conceive.
We have selected a set of heuristics that falls somewhere between these extremes and that
is supported by a thorough methodology and practical testing. Forsell and Johansson (2010)
conducted a meta-analysis of six common sets of heuristics by presenting study participants with
a list of known usability problems and asking them to classify how well each was explained by
each heuristic in the six sets. From this study, they compiled a set of ten heuristics that generated
the broadest coverage of usability issues (Table 1). This set of heuristics has since been
incorporated in further meta-analyses (Tarrell et al., 2014; Oliveira & Silva, 2017; Santos, Silva,
& Dias, 2018) and studied on its own (Väätäjä et al., 2016); its usage is well supported in this
context.
Heuristic Explanation
Information coding Visual elements (icons, colors, etc.) map intuitively to the data they convey.
Minimal actions The fewest possible user actions are needed to accomplish tasks.
Flexibility Goals may be accomplished in multiple ways; interface allows customization to the user's workflow and requirements.
Orientation and help Task support, additional information, and undo/redo are available.
Spatial organization Space is used efficiently; visual layout supports user understanding.
Consistency Similar design indicates similarity and different design indicates difference.
Recognition rather than recall Memorization on the user’s part is minimized.
Prompting User is directed to all possible functions when multiple options are available.
Remove the extraneous Extra information or visual elements do not obscure the needed data.
Data set reduction Features for data set reduction (e.g. filtering) are accessible and efficient.
Table 1: Summary of data visualization usability heuristics as refined by Forsell and Johansson (2010).
low-level and specific heuristics become appropriate (Zuk, Schlesier, Neumann, Hancock, &
Carpendale, 2006). Additionally, we recommend that these heuristic evaluations be conducted by
both experts in data visualization as well as expert users (i.e. clinicians who are thoroughly
familiar with the data being presented) as this has been shown to be more effective than
assessments by single experts (Lin, Guerguerian, & Laussen, 2015).
Users’ responses to each prompt are converted to a numerical score, which can then be
normalized to produce a percentile ranking of the interface usability (ibid). This scale covers the
attributes of usability as defined by a number of leaders in the field of usability research:
learnability, efficiency, correctness, memorability, and subjective satisfaction (Bruno & Al-
Qaimari, 2004).
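The System Usability Scale (SUS), referenced later in this report, is one widely used instrument of this kind. Its standard scoring rule converts ten 1-5 Likert responses into a 0-100 score: odd-numbered, positively worded items contribute (response - 1); even-numbered, negatively worded items contribute (5 - response); the sum is scaled by 2.5. A minimal sketch of that rule:

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 System Usability Scale score."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS expects ten responses on a 1-5 scale")
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)  # 0-based index: even index = odd-numbered item
        for i, r in enumerate(responses)
    )
    return total * 2.5

# A neutral respondent (all 3s) lands at the midpoint:
midpoint = sus_score([3] * 10)  # 50.0
```

A frequently cited benchmark treats scores above roughly 68 as above-average usability, which gives a rough anchor for interpreting the percentile ranking described above.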
However, a person’s responses to a Likert scale may not always correlate with the tool’s
actual performance (Douven, 2017). Therefore, we also recommend a study of representative
tasks and users. This study should comprise both quantitative measures (speed and correctness of task completion) and qualitative observations of user satisfaction and any faults encountered. The tasks should be open-ended rather than overly specific, so that the goal is not immediately obvious and participants are not handed clues that would taint the results. To this end, we have created tasks to measure effectiveness, efficiency, satisfaction, and error rate, which together provide an assessment of the UI's usability.
1. Effectiveness
Objective: How often does the clinician refer to the MS Longitudinal Viewer?
Task: No questions or specific task
Observation: Observe the user, note if the clinician generally uses the MS Longitudinal Viewer or opts to
use the Hospital's traditional EMR system.
2. Efficiency
Objective: How intuitive is the UI?
Task 1: Ask a first-time user to open a patient's relapse data
Observation: Observe the user, note how long it takes for the first-time user to find that
information.
Objective: How learnable is the UI?
Task 2: Ask the user to perform the same task once each week
Observation: Record the time taken to complete the task each week and note how it changes
3. Satisfaction
Objective: What is the user's experience with the UI overall?
Task: Anonymous survey of the clinician's time with the UI
Observation: Statistically analyze the results of the survey and infer what needs to be changed to improve
satisfaction
4. Error rate
Objective: How many errors are made, and how can tweaking the UI reduce them?
Task: Assign a new task to a regular user.
Observation: Note how many errors are made trying to perform this task, and note how the UI can be
modified to reduce the errors
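The quantitative side of these tasks (speed and correctness) is easiest to analyze if observations are recorded in a consistent structure. A minimal sketch follows; the class and field names are our own, not part of any prescribed protocol.

```python
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    task: str
    duration_s: float  # time to completion (or abandonment), in seconds
    errors: int        # observer-counted missteps
    completed: bool

@dataclass
class UsabilitySession:
    participant: str
    records: list = field(default_factory=list)

    def log(self, task, duration_s, errors, completed=True):
        self.records.append(TaskRecord(task, duration_s, errors, completed))

    def success_rate(self):
        return sum(r.completed for r in self.records) / len(self.records)

    def mean_completion_time(self):
        done = [r.duration_s for r in self.records if r.completed]
        return sum(done) / len(done)

# Example: one participant across two hypothetical tasks
session = UsabilitySession("P01")
session.log("open relapse data", duration_s=42.0, errors=1)
session.log("locate MRI history", duration_s=118.0, errors=3, completed=False)
```

Aggregating sessions across participants then yields the success rates and timing comparisons used in the trial design described below.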
Although user behavior and interaction might change under observation (Sonderegger, 2009), the task-observation method, combined with the System Usability Scale across various users, is a good practical indication of the tool's usability. We recommend
controlling the trial in several ways: by separating trial users of the MS Longitudinal Viewer into
EMR-experienced and EMR-naive groups, in order to discern any connection between the
presentation of information in the standard EMR interface and the viewer; and by comparing task
performance on the MS Longitudinal Viewer with EMR-experienced and EMR-naive task
performance on the traditional EMR interface. These comparisons will not only indicate any
usability concerns in the interface, but also provide an estimate of time saved by using the MS
Longitudinal Viewer, which can then be incorporated into market and pricing strategies.
Addressable Market (TAM), Serviceable Addressable Market (SAM), and Initial Target Market
(ITM). In addition, we identified prospective diseases that could benefit from longitudinal visualization software and determined the potential market sizes of these spaces.
1. Keep the MS longitudinal viewer in house to benefit from its competitive advantage.
2. License the software to other institutions for clinical and research use.
To determine the US TAM, we first determined the number of potential customers for the
MS Longitudinal Viewer. To obtain this number, we analyzed a 2018 survey of neurologists, estimating that about 9,700 researchers and clinicians actively work in the MS space (Appendix 5). Further, we averaged the prices of commercial visualization software packages to estimate a $3,100 per-user, per-year price for the MS Longitudinal Viewer software (Appendix 6).
Using these figures, we estimate a US TAM of $30M for the MS Longitudinal Viewer. However,
the software packages we considered in estimating price are off-the-shelf solutions not customized to the
EMR or to clinical data for specific diseases. Because of this, we expect the MS Longitudinal
Viewer’s tailored functionality may command a premium above our estimated price. Thus, we
believe these market size evaluations are conservative. Future considerations include accounting
for the value proposition of the MS Longitudinal Viewer as well as the spending propensities in
the clinical and research settings of MS. We recommend additional pricing and marketing
development to more accurately determine a price point and market size for the MS Longitudinal
Viewer.
To estimate the US SAM, the portion of the market that can theoretically be reached with
current technology, we considered the EMR compatibility of the MS Longitudinal Viewer.
Because the MS Longitudinal Viewer is only compatible with Epic EMRs and 54% of US
patient records are managed in Epic (Appendix 7), we make the rough estimate that 54% of the
market is reachable (Glaze, 2015). Therefore, we estimate a US SAM of $16M. Further, despite
Epic's popularity within the US, it services only 2.5% of global patients (ibid). Most global
EMR demand is fulfilled by Cerner Corp., which ranks first or second in market share in 90% of global regions (Naughton, 2018). Thus, to serve demand internationally, the MS Longitudinal
Viewer will need to seek compatibility with other EMRs, most likely Cerner’s EMR.
Lastly, we cross-referenced the top 16 NIH-funded institutions with the domestic
distribution of neurologists to identify an ITM of twelve prospective medical institutions (2018
Insights Report, 2018; Philippidis, 2018) (Appendix 8). Although the actual target market will
most likely include those in the network of the Center of Excellence for MS, we used this scenario to calculate an ITM size of $2.4M. Additional considerations when defining a target market may include the geographic epidemiology of MS. Studies suggest that MS is most
prevalent in the northernmost regions of the US (Sadovnick & Ebers, 1993; Dilokthornsakul et
al., 2016). Further, countries farther from the equator have higher rates of MS (Appendix 9). In
addition, given the physical presence of direct competitors in California (BioScreen), we may
want to first concentrate on eastern institutions.
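The bottom-up arithmetic behind the TAM and SAM estimates above is simple enough to make explicit. A sketch reproducing the report's figures (9,700 MS professionals at $3,100 per user per year, 54% Epic reachability); the helper names are ours.

```python
def tam(potential_users, annual_price_per_user):
    """Total Addressable Market: every potential user at the estimated price."""
    return potential_users * annual_price_per_user

def sam(tam_value, reachable_fraction):
    """Serviceable Addressable Market: the share reachable with current technology."""
    return tam_value * reachable_fraction

ms_tam = tam(9_700, 3_100)   # ~$30M, matching the estimate above
ms_sam = sam(ms_tam, 0.54)   # ~$16M, given Epic's 54% share of US patient records
```

The same two functions, with different user counts, reproduce the heart disease and prostate cancer estimates in the next section.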
Figure 1: MS Longitudinal Viewer TAM, SAM, and ITM.
Figure 2: Criteria for Selecting Product Development Areas (NCCDPHP, 2018; Gourraud, 2014).
To understand the size of these prospective market spaces, we first estimated the number
of patients, clinicians, and researchers. In the US, there are about 651,000 cases of HCM and
162,000 cases of ARVC. In addition, about 84M people in the US suffer from some form of
heart disease (“Cardiovascular Disease Statistics,” n.d.). Although we could not segment the
number of cardiologists into those who treat HCM and ARVC, we did find that 22,000 active US
physicians clinically treat heart disease (Number of People…, 2016). Further, given that 1 in
every 9 men will be diagnosed with prostate cancer in their lifetime, nearly 18M men in the US
have or will develop prostate cancer (“Treating Prostate Cancer,” 2019; “Male to Female
Ratio…,” 2015). We estimate that 29,100 active physicians treat prostate cancer (Number of
People…, 2016; “Treating Prostate Cancer,” 2019). Moreover, to estimate the number of prostate
cancer and heart disease researchers, we analyzed the NIH’s research spending (Appendix 10).
Thus, we approximate that there are 6,900 research professionals and 28,900 total medical (research & clinical) professionals involved with heart disease, and 1,300 research professionals and 30,400 total medical professionals involved with prostate cancer.
Finally, we calculated the size of TAM and SAM for both heart disease and prostate
cancer. Utilizing the same price model as MS (Appendix 6), we estimate a TAM of $90M for
heart disease (Figure 3) and $94M for prostate cancer (Figure 4). As with MS, accounting for the software's compatibility with the Epic EMR, we approximate a SAM of $48M for heart disease (Figure 3) and $52M for prostate cancer (Figure 4). We recommend further research on
these markets to determine their target addressable sizes.
Competitive Analysis
We reviewed products similar to the MS Longitudinal Viewer that are competitive in the
MS space or offer insight on important features required in an MS visualization tool.
their condition with respect to other patients. MS BioScreen currently offers three different
platforms:
1. Open MS BioScreen: Available to any patient, caregiver, or clinician with a web browser. Users can enter data on their condition; obtain a richly contextualized, digestible, and actionable predictive output free of commercial interest; and participate in a shared decision-making process.
2. Weill BioScreen: This platform pulls data from many disparate sources, including the traditional EMR,
research studies, and patient surveys. Then it processes and visualizes the data in a single cohesive display.
3. NeuroShare: This system allows data to be seamlessly shared throughout a health system.
Of all the above platforms, Open MS BioScreen is the only one that has been launched.
system includes tools for retrieval, visualization, exploration and analysis of raw time-oriented
data and derived concepts for multiple patient records (Klimov, Shahar, & Taieb-Maimon,
2010). To derive meaningful interpretations from raw time-oriented data, VISITORS uses a
method known as knowledge-based temporal abstraction. Three unique features, whose combination distinguishes this approach from others, characterize their visual exploration research: treatment of multiple patient records; treatment of the temporal dimension as a first-class citizen; and a user interface based on the temporal-abstraction ontology, which enables navigation and exploration of semantically related raw and abstract concepts.
Key Features
In this section, we highlight requirements that are essential for an efficient MS
visualization tool. Some of these features were inspired by the tools that were discussed above.
1. Single Sign-On User Authentication: Single sign-on (SSO) is a common enterprise authentication process that gives a user access to multiple applications with one set of login credentials. SSO is important for applications that have the potential to grow.
2. String Search: A string search algorithm makes it easier to find topics within a complex program.
3. Timeline with (semantic) zoom and pan functionality: Semantic zoom allows objects to change their display form or reveal additional information, while panning allows smooth movement of the viewing frame. Together, these can be used to get both an overview of and detailed information on patients' longitudinal data.
4. Single-patient multivariate pattern visualization: Like the visual representation used in AnamneVis, a
holistic visualization of multiple variables for a patient allows the user to easily track their MS patients' progress.
5. Multi-patient multivariate pattern visualization: A multivariate and multi-patient comparison can help
physicians and researchers link symptoms to specific diseases and find treatments that were successful in
helping other patients. Acquiring this data requires intelligent data acquisition, organization and
presentation.
We strongly believe that a tool meeting all the above requirements will enable clinicians to attain
both a holistic and detailed understanding of their patients. This will help offer diagnoses and
treatment plans more quickly and accurately. In addition to the application requirements needed for clinical use, MS researchers seeking new connections between variables can benefit from the following additional features:
1. Regional multivariate pattern visualization: The pathogenesis of MS is unknown, but researchers believe that environment plays a significant role, so it will be important to collect and visualize health data for multiple patients across multiple regions. Consider a visualization like the Dartmouth Atlas Project, which uses Medicare data to provide comprehensive information and analysis about national, regional, and local markets, as well as individual hospitals and their affiliated physicians.
2. Dynamic query filtering: A dynamic query filtering system enables researchers to compare different data points and aids in discovering causes and new relationships between health variables.
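Dynamic query filtering of this kind can be prototyped as composable predicates applied to patient records. The field names below (`region`, `relapses`, `edss`) are purely illustrative.

```python
def dynamic_filter(records, **criteria):
    """Keep records matching every criterion.

    A criterion value may be a plain value (equality test) or a callable
    predicate, so filters can be composed and adjusted interactively.
    """
    def matches(record):
        for field_name, cond in criteria.items():
            value = record.get(field_name)
            if callable(cond):
                if not cond(value):
                    return False
            elif value != cond:
                return False
        return True
    return [r for r in records if matches(r)]

# Illustrative records only:
patients = [
    {"id": 1, "region": "Northeast", "relapses": 3, "edss": 4.0},
    {"id": 2, "region": "Southwest", "relapses": 1, "edss": 2.5},
    {"id": 3, "region": "Northeast", "relapses": 0, "edss": 1.0},
]
northeast_relapsers = dynamic_filter(
    patients, region="Northeast", relapses=lambda n: n >= 1
)
```

Because each criterion is independent, a researcher can add or drop filters one at a time and immediately see how the matching cohort changes.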
Deployment Considerations
The MS Longitudinal Viewer uses HighCharts to visualize an already organized dataset.
It generates a line graph using the dataset provided by Epic SmartForms. Due to the MS
Longitudinal Viewer's current state and possible threats from emergent health visualization tools, we suggest that the MS Longitudinal Viewer be deployed and upgraded in multiple phases until it becomes an unparalleled tool in the market space.
1. (Pre)Deployment Phase 1: Before deploying the longitudinal viewer, there are essential features it must
possess to make it market-worthy and secure for clinical use.
a. Single Sign-on User Authentication: We recommend the MS Longitudinal Viewer utilize a single
sign-on user authentication not only to prevent unauthorized users from gaining access to sensitive
information, but also to allow for the flexibility to upgrade the system.
b. Patient-Specific Zoom and Pan Functionality: We also recommend clinicians and researchers have
the capability to enter a patient’s name, medical record number (MRN) or Epic Identity (EID) to
access patient specific information. The visualized information should also be navigable via
semantic zoom and pan functionalities.
c. Support Line Communication: In order to get feedback on the performance of the application, we
recommend the MS Longitudinal Viewer include a help functionality that will not only help the
user understand the features provided in the tool but also provide a way to collect written reports,
feedback and suggestions from the users.
2. Deployment Phase 2: Starting in Phase 2, we recommend placing priority on debugging and upgrading to
user specifications and suggestions shared through the support communication line.
a. Holistic Single Patient Multivariate Pattern Visualization: We suggest creating a dashboard using
a single patient multivariate visualization similar to AnamneVis.
i. Each sector in the radial sunburst should represent a variable in the MS study.
ii. We also suggest using color to represent how well the patient is doing in each variable.
When a sector is clicked, it should display the corresponding information via a linear
graph.
b. String Search: To help the user get acquainted with the large amount of information that will be
displayed, we suggest adding a string search functionality.
3. Deployment Phase 3:
a. Multi-Patient Multivariate Visualization: In this phase, we recommend that the MS Longitudinal Viewer include data from patients with similar disease profiles in its timeline view.
b. Regional Multi-Patient Multivariate Visualization: We suggest the MS Longitudinal Viewer introduce regional multi-patient multivariate visualization in order to give researchers the ability to study the disease within a given region.
c. Dynamic Query Filtering: We also recommend adding a query filter system to give researchers the
flexibility to find new data connections.
4. Deployment Phase 4:
a. Machine Learning Algorithms: The goal of this phase is to use machine learning algorithms to collect and organize data and to generate clusters or features that might not be easily detectable by the naked eye (we suggest using Principal Component Analysis or Linear Discriminant Analysis).
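As a sketch of the Principal Component Analysis suggestion, the NumPy implementation below centers a feature matrix, takes its singular value decomposition, and projects patients onto the leading components. The data here is synthetic, and a production system would likely use a vetted library such as scikit-learn rather than this hand-rolled version.

```python
import numpy as np

def pca(X, n_components=2):
    """Project rows of X onto the top principal components.

    Returns the projected coordinates and the fraction of total
    variance explained by each retained component.
    """
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / (S ** 2).sum()          # variance fractions, descending
    return Xc @ Vt[:n_components].T, explained[:n_components]

# Synthetic stand-in for a patient-by-variable feature matrix
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 6))
coords, explained = pca(X, n_components=2)
```

Plotting `coords` colored by a clinical variable is one simple way to surface clusters that are hard to spot in the raw timeline view.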
Annotated Bibliography
Visualization - General
1. West, V. L., Borland, D., & Hammond, W. E. (2014). Innovative information
visualization of electronic health record data: A systematic review. Journal of the
American Medical Informatics Association, 22(2), 330-339. doi:10.1136/amiajnl-2014-
002955
A review of the available literature from 1996 to 2013 on innovative visualizations for medical
data, including both individual and “big data” multi-patient systems. Authors note the
specific challenges in visualization of EMR data, and conclude that few visualization
methods exist to adequately confront these challenges.
2. Aigner, W., Miksch, S., Muller, W., Schumann, H., & Tominski, C. (2008). Visual
Methods for Analyzing Time-Oriented Data. IEEE Transactions on Visualization and
Computer Graphics, 14(1), 47-60. doi:10.1109/tvcg.2007.70415
Analysis of graphical techniques specific to temporal data, including temporal
abstraction, principal component analysis, and data aggregation. Focuses on the utility
of user interaction and user-interest design.
3. Boyd, A. D., Young, C. D., Amatayakul, M., Dieter, M. G., & Pawola, L. M. (2017).
Developing Visual Thinking in the Electronic Health Record. Studies in Health
Technology and Informatics, 245, 308-312. doi:10.3233/978-1-61499-830-3-308
Overview of EMR downsides and their historical utility in institutions, with baseline
documentation (19 papers) of data visualization utilizing the EMR. Includes a good
visualization of the correlations between data, models, knowledge, and visualization.
4. Holzinger, A., Schwarz, M., Ofner, B., Jean-quartier, F., Calero-Valdez, A., Roecker, C.,
& Ziefle, M. (2014). Towards Interactive Visualization of Longitudinal Data to Support
Knowledge Discovery on Multi-touch Tablet Computers. Lecture Notes in Computer
Science, 124-137. doi:10.1007/978-3-319-10975-6_9
Longitudinal visualization overview, with considerations specific to touchscreen
displays.
5. Bui, A. A. T., & Hsu, W. (2009). Medical Data Visualization: Toward Integrated Clinical
Workstations. Medical Imaging Informatics, 139-193. doi:10.1007/978-1-4419-0385-3_4
Highly detailed analysis of the utility of visualization in medical settings, including types
of visualization, user modeling and assumptions, workflow, and integrated display.
Finishes with a section on patient-centric visualization and concerns of audience,
expectation, and access.
6. Hildebrand, C., Stausberg, J., Englmeier, K. H., & Kopanitsa, G. (2013). Visualization of
Medical Data Based on EHR Standards. Methods of Information in Medicine, 52(01), 43-
50. doi:10.3414/me12-01-0016
Defines goals for medical data visualization within EMRs and suggests considerations
for the development of a standard EMR data viewer, but concludes that it will be difficult
to overcome the “contradiction between a generic method and a flexible and user-
friendly data layout.”
Visualization - Patient-Oriented
1. Dolan, J. G., Veazie, P. J., & Russ, A. J. (2013). Development and initial evaluation of a
treatment decision dashboard. BMC Medical Informatics and Decision Making, 13(1).
doi:10.1186/1472-6947-13-51
Design and assessment of a visualization tool used to guide patients in choosing a
treatment method. Includes patient risks of adverse reactions and drug interactions.
Discusses the benefit and risks of altering patients’ (and doctors’) cognitive load.
2. Faisal, S., Blandford, A., & Potts, H. W. (2013). Making sense of personal health
information: Challenges for information visualization. Health Informatics Journal,19(3),
198-217. doi:10.1177/1460458212465213
Outlines the current challenges facing medical data visualization, and how to better
optimize data visualization tools for both medical professionals and patients. Also points
out data that’s important to both patients and medical professionals.
3. Wågbø, H. D. (2014). The Patient Perspective: Utilizing Information Visualization to
Present Health Records to Patients (thesis). Norwegian University of Science and
Technology.
In-depth analysis of patients' perspectives on gaining access to their EMR data, whether
such access benefits patients in any way, and, if so, how feasible it is to use
state-of-the-art EMR visualization techniques to present this data to them.
2. Plaisant, C., Milash, B., Rose, A., Widoff, S., & Shneiderman, B. (1996). LifeLines:
Visualizing personal histories. Conference on Human Factors in Computing Systems,
221-227. doi:10.1145/238386.238493
Description of LifeLines, a simple and generalizable visualization tool for longitudinal
viewing of a person’s lifetime, with specific examples in legal and medical fields. Special
attention paid to the implications of visual design - colors, sizes, icon selection - as well
as user feedback.
3. Wang, T. D., Wongsuphasawat, K., Plaisant, C., & Shneiderman, B. (2010). Visual
information seeking in multiple electronic health records. International Conference on
Health Informatics, 46-55. doi:10.1145/1882992.1883001
User case study results for Lifelines2, a system for visualizing temporal categorical data
across multiple patient records. Contains detailed data on users’ interactions with the
system and develops a “process model” for the manner in which users explore data.
4. Wang, T. D., Wongsuphasawat, K., Plaisant, C., & Shneiderman, B. (2011). Extracting
Insights from Electronic Health Records: Case Studies, a Visual Analytics Process
Model, and Design Recommendations. Journal of Medical Systems, 35(5), 1135-1152.
doi:10.1007/s10916-011-9718-x
Published follow-up to “Visual information seeking in multiple electronic health records.”
5. Wongsuphasawat, K., Gómez, J. A., Plaisant, C., Wang, T. D., Taieb-Maimon, M., &
Shneiderman, B. (2011). LifeFlow: Visualizing an Overview of Event Sequences.
Conference on Human Factors in Computing Systems, 1747-1756.
doi:10.1145/1978942.1979196
Development of LifeFlow, a tool for summary and visualization of sequences across
multiple records using a hierarchical system inspired by “icicle” plots and phylogenetic
trees. Includes case studies in medical and transportation fields as well as a user
evaluation study and verbal feedback.
6. Ordonez, P., Oates, T., Lombardi, M. E., Hernandez, G., Holmes, K. W., Fackler, J., &
Lehmann, C. U. (2012). Visualization of multivariate time-series data in a neonatal ICU.
IBM Journal of Research and Development, 56(5), 7:1-7:12.
doi:10.1147/jrd.2012.2200431
Design and evaluation of a system for visually presenting
the progression of an individual patient in small-scale time - hours or days. Uses spider
graphs, rather than line graphs, to display information more compactly. Allows both
customizing (user selects bounds) and personalizing (bounds extrapolated from data)
display to each patient.
7. Widanagamaachchi, W., Livnat, Y., Bremer, P.-T., Duvall, S., & Pascucci, V. (2018).
Interactive Visualization and Exploration of Patient Progression in a Hospital Setting.
Retrieved from http://www.huduser.org/Datasets/IL/IL08/in_fy2008.pdf
Overview of a visualization and analysis tool for understanding patient progression over
time. This tool stands out because it is able to visualize and analyze group data, allowing
medical professionals to understand how a patient group is progressing.
8. Kopanitsa, G., Veseli, H., & Yampolsky, V. (2015). Development, implementation and
evaluation of an information model for archetype based user responsive medical data
visualization. Journal of Biomedical Informatics, 55, 196-205.
doi:10.1016/j.jbi.2015.04.009
Conceptual framework for the development and evaluation of a visualization module in
the Avrora EMR. Includes diagrams depicting the criteria and questions answered by
evaluation methods as well as results: functionality (modeling efficiency, data
accessibility), efficiency (cognitive efficiency, doctors’ performance), and usability
(learnability).
9. Rind, A., Wang, T. D., Aigner, W., Miksch, S., Wongsuphasawat, K., Plaisant, C., &
Shneiderman, B. (2013). Interactive Information Visualization to Explore and Query
Electronic Health Records. Foundations and Trends in Human-Computer Interaction,
5(3), 207-298. doi:10.1561/1100000039
Comparison of 14 visualization tools, both individual and multi-patient, with
categorization of their available functionalities, as well as briefer analysis of built-in
visualizations in commercial EMR systems. Includes details on the unconventional glyph-
based VIE-VISU system.
10. Popow, C., Unterasinger, L., & Horn, W. (2001). Support for Fast Comprehension of
ICU Data: Visualization using Metaphor Graphics. Methods of Information in Medicine,
40(5), 421-424. doi:10.1055/s-0038-1634202
Development of the VIE-VISU small multiples visualization tool for NICU care.
11. Bade, R., Schlechtweg, S., & Miksch, S. (2004). Connecting time-oriented data and
information to a coherent interactive visualization. Conference on Human Factors in
Computing Systems. doi:10.1145/985692.985706
Prototype for Midgaard semantic zoom system.
Evaluation - User-Based
1. Sauro, J. (2011, February 2). Measuring Usability with the System Usability Scale (SUS).
Retrieved from https://measuringu.com/sus/
Overview of the standard System Usability Scale and how to interpret its results.
2. Pohl, M., Wiltner, S., Rind, A., Aigner, W., Miksch, S., Turic, T., & Drexler, F. (2011).
Patient Development at a Glance: An Evaluation of a Medical Data Visualization.
Lecture Notes in Computer Science, 292-299. doi:10.1007/978-3-642-23768-3_24
Overview of a user study of nine physicians using a longitudinal data viewer for diabetic
patients.
3. Santos, B. S., & Dillenseger, J. (2005). Quality evaluation in medical visualization: Some
issues and a taxonomy of methods. Medical Imaging: Visualization, Image-Guided
Procedures, and Display, 5744, 612-620. doi:10.1117/12.594549
Highly theoretical paper on the conceptual basis for evaluating data visualization, with
some consideration of the practical implications for how such evaluations could be
conducted. Core concepts are “level of information representation, types of visualization
evaluation, and evaluation methodologies.”
4. Nykänen, P., Brender, J., Talmon, J., Keizer, N. D., Rigby, M., Beuscart-Zephir, M., &
Ammenwerth, E. (2011). Guideline for good evaluation practice in health informatics
(GEP-HI). International Journal of Medical Informatics, 80(12), 815-827.
doi:10.1016/j.ijmedinf.2011.08.004
Rigorous European guideline for every stage of a health informatics evaluation,
including a list of dozens of issues to be considered in each stage.
5. Cusack, C. M., Byrne, C. M., Hook, J. M., McGowan, J., Poon, E., & Zafar, A. (2009).
Health Information Technology Evaluation Toolkit: 2009 Update (U.S. Department of
Health and Human Services, Agency for Healthcare Research and Quality). Rockville,
MD.
United States government toolkit for conducting an evaluation of health IT tools step-by-
step. Walks through the basics of determining goals and measures of performance, survey
design, sources of data, and sample implementations.
Evaluation - Heuristic
1. Zuk, T., Schlesier, L., Neumann, P., Hancock, M. S., & Carpendale, S. (2006). Heuristics
for information visualization evaluation. Novel Evaluation Methods for Information
Visualization. doi:10.1145/1168149.1168162
Meta-analysis of several common heuristic sets for information visualization - Zuk &
Carpendale, Shneiderman, and Amar & Stasko - with discussion of the suitability of each
for different points in the design process, the supporting research, and progress towards
a more unified set of heuristics.
2. Shneiderman, B. (2003). The Eyes Have It: A Task by Data Type Taxonomy for
Information Visualizations. The Craft of Information Visualization, 364-371.
doi:10.1016/b978-155860915-0/50046-9
Classic framework of tasks and data types in order to understand the usage of a data
visualization. The origin of the visual information-seeking mantra “overview first, zoom
and filter, then details on demand.”
3. Forsell, C. (2012). Evaluation in Information Visualization: Heuristic Evaluation.
International Conference on Information Visualization, 1550-6037/12.
doi:10.1109/IV.2012.33
Analysis of heuristic evaluation and an in-depth explanation of its characteristics.
Provides information that enables the reader to use heuristic evaluation to evaluate
information visualization as effectively as possible.
4. Forsell, C., & Johansson, J. (2010). An Heuristic Set for Evaluation in Information
Visualization. Conference on Advanced Visual Interfaces, 199-206.
Meta-analysis of six heuristic sets for information visualization based on how effectively
heuristics covered a list of common problems. Assesses coverage of each heuristic set as
well as refining a set of 10 excerpted heuristics that are deemed most effective.
5. Lin, Y. L., Guerguerian, A., & Laussen, P. (2015). Heuristic Evaluation of Data
Integration and Visualization Software Used for Continuous Monitoring to Support
Intensive Care: A Bedside Nurse's Perspective. Journal of Nursing & Care, 4(6).
doi:10.4172/2167-1168.1000300
Overview of a heuristic evaluation of T3 ICU monitoring displays by teams of critical
care nurses and usability experts.
6. Tarrell, A.E., Forsell, C., Fruhling, A. L., Grinstein, G., Borgo, R., & Scholtz, J. (2014).
Toward Visualization-Specific Heuristic Evaluation. Interdisciplinary Informatics
Faculty Proceedings & Presentations, 1. doi:10.1145/2669557.2669580
In-depth analysis of various heuristic evaluation methods: their advantages, limitations,
and disadvantages. Also proposes solutions and other evaluation methods that provide a
more accurate, comprehensive, and community-accepted set of visualization-specific
heuristics.
7. Oliveira, M. R., & Silva, C. G. (2017). Adapting Heuristic Evaluation to Information
Visualization: A Method for Defining a Heuristic Set by Heuristic Grouping.
International Joint Conference on Computer Vision, Imaging and Computer Graphics
Theory and Applications, 225-232. doi:10.5220/0006133202250232
Clusters 62 common information visualization heuristics into a new set of 15
focus-distinct heuristics usable for a heuristic evaluation.
8. Väätäjä, H., Varsaluoma, J., Heimonen, T., Tiitinen, K., Hakulinen, J., Turunen, M., . . .
Ihantola, P. (2016). Information Visualization Heuristics in Practical Expert Evaluation.
Novel Evaluation Methods for Visualization. doi:10.1145/2993901.2993918
Practical study and critique of the heuristics proposed by Forsell & Johansson, 2010 by
five expert participants.
9. Santos, B. S., Silva, S., & Dias, P. (2018). Heuristic Evaluation in Visualization: An
empirical study. Novel Evaluation Methods for Visualization.
Comparison of heuristic evaluations under three heuristic sets - Nielsen, Forsell &
Johansson, and Zuk & Carpendale - with a usability study in order to discern which
issues will be noted by either type of evaluation.
Contrary to the paper’s hypothesis, it was found that not all problems identified in
heuristic evaluations will be detected by users, even when study tasks are directed in such
a way as to expose users to the problems.
10. Gerhardt‐Powals, J. (1996). Cognitive engineering principles for enhancing human‐
computer performance. International Journal of Human-Computer Interaction, 8(2), 189-
211. doi:10.1080/10447319609526147
Development and test application of a set of ten “cognitive engineering principles,” i.e.
user-interface design heuristics, in a mock anti-submarine warfare task.
11. Scapin, D. L., & Bastien, J. M. (1997). Ergonomic criteria for evaluating the ergonomic
quality of interactive systems. Behaviour & Information Technology, 16(4-5), 220-231.
doi:10.1080/014492997119806
Design and assessment of a set of concrete heuristics for interaction with a data
visualization system; more about action than visual design.
Evaluation - Physiological
1. Qu, Q., Guo, F., & Duffy, V. G. (2017). Effective use of human physiological metrics to
evaluate website usability. Aslib Journal of Information Management,69(4), 370-388.
doi:10.1108/ajim-09-2016-0155
Chinese study evaluating the correlation of eye fixation duration, fixation count, blink
rate, and heart rate variability with satisfaction, efficiency, effectiveness, learnability,
and memorability of an interface. Found strong support for most hypothesized
connections.
2. Foglia, P., Prete, C. A., & Zanda, M. (2008). Relating GSR Signals to Traditional
Usability Metrics: Case Study with an anthropomorphic Web Assistant. Instrumentation
and Measurement Technology Conference. doi:10.1109/imtc.2008.4547339
Study evaluating the correlation of heart rate, respiration rate, and galvanic skin
resistance with ease of use and approval of a web interface. Found increased respiration
rate correlated with approval of the interface.
3. Hercegfi, K. (2011). Heart Rate Variability Monitoring during Human-Computer
Interaction. Acta Polytechnica Hungarica,8(5), 205-224.
Study of the correlation between mid-frequency power of heart rate variability with
mental effort; found tentative confirmation of the hypothesis. MF monitoring is
potentially very high-resolution in time (down to 6.2 seconds in this study) and valuable
for continuous monitoring during usability evaluation.
4. Wilson, G. M., & Sasse, M. A. (2000). Do Users Always Know What’s Good For Them?
Utilising Physiological Responses to Assess Media Quality. People and Computers, 14,
327-339. doi:10.1007/978-1-4471-0515-2_22
Study evaluating galvanic skin resistance, heart rate, and blood volume pulse as
measurements of stress induced by low-frame-rate video tasks.
22. Koskie, B. (2018, June 20). Multiple Sclerosis: Facts, Statistics, and You (S. Kim MD, Ed.). Retrieved
January 23, 2019, from https://www.healthline.com/health/multiple-sclerosis/facts-statistics-infographic#1
23. Looker vs. Tableau: Pricing and Features Comparison. (2018, September 21). Retrieved January 23, 2019,
from https://www.betterbuys.com/bi/looker-vs-tableau/
24. Lorang, N. (2016, November 03). Let's Chart: Stop those lying line charts. Retrieved January 15, 2019,
from https://m.signalvnoise.com/lets-chart-stop-those-lying-line-charts-60020e299829
25. Luxner, L. (2017, November 20). Nearly 1 Million Americans Have Multiple Sclerosis, NMSS Prevalence
Study Finds. Multiple Sclerosis News Today. Retrieved January 23, 2019, from
https://multiplesclerosisnewstoday.com/2017/11/20/nearly-1-million-americans-have-multiple-sclerosis-
nmss-prevalence-study-finds/
26. Male to Female Ratio of the Total Population. (2015). Retrieved January 23, 2019, from
https://knoema.com/atlas/United-States-of-America/topics/Demographics/Population/Male-to-female-ratio
27. Naughton, Marc. (2018). HIMSS Investment Community Meeting. Presentation, Las Vegas. Retrieved
January 23, 2019 from https://cernercorporation.gcs-web.com/static-files/c92a1999-be90-4964-a6de-
fc0945c22280
28. National Center for Chronic Disease Prevention and Health Promotion (NCCDPHP). (2018, November 19).
Retrieved January 23, 2019, from https://www.cdc.gov/chronicdisease/about/index.htm
29. Number of People per Active Physician by Specialty (Rep.). (2016, April). Retrieved January 23, 2019,
from Association of American Medical Colleges website:
https://www.aamc.org/data/workforce/reports/458490/1-2-chart.html
30. Onukwugha, E., Plaisant, C., & Shneiderman, B. (2016). Data Visualization Tools for Investigating Health
Services Utilization Among Cancer Patients. Oncology Informatics,207-229. doi:10.1016/b978-0-12-
802115-6.00011-2
31. Peddie, J. (2019). Computer Graphics Software Market Worldwide Segments, 2013-2021. Retrieved
January 23, 2019, from https://www.statista.com/statistics/269250/computer-graphics-application-software-
market-volume-worldwide-by-segment/
32. Philippidis, A. (2018, June 4). Top 50 NIH-Funded Institutions of 2018. Genetic Engineering &
Biotechnology News. Retrieved January 23, 2019, from https://www.genengnews.com/a-lists/top-50-nih-
funded-institutions-of-2018/
33. Usability Testing. (2013, November 13). Retrieved January 14, 2019, from https://www.usability.gov/how-
to-and-tools/methods/usability-testing.html
34. Sadovnick, A. D., & Ebers, G. C. (1993). Epidemiology of Multiple Sclerosis: A Critical Overview.
Canadian Journal of Neurological Sciences,20(1), 17-29. doi:10.1017/s0317167100047351
35. Sonderegger, A., & Sauer, J. (2009). The influence of laboratory set-up in usability tests: Effects on user
performance, subjective ratings and physiological measures. Ergonomics, 52(11), 1350-1361.
doi:10.1080/00140130903067797
36. State of MS: Global Survey Fact Sheet (Rep. No. FCH-1009120). (2014). State of MS Consortium.
37. Straight Talk: Review of Sisense; The Pros and Cons. (2018, September 19). Retrieved January 23, 2019,
from https://www.yurbi.com/blog/straight-talk-review-of-sisense-the-pros-and-cons/
38. Treating Prostate Cancer. (2019). Retrieved January 23, 2019, from
https://www.cancer.org/cancer/prostate-cancer/treating.html
39. Vaidya, A. (2017, May 2). Epic, Cerner hold 50% of hospital EHR market share: 8 things to know.
Becker's Hospital Review. Retrieved January 23, 2019, from
https://www.beckershospitalreview.com/healthcare-information-technology/epic-cerner-hold-50-of-
hospital-ehr-market-share-8-things-to-know.html
40. Who Uses Epic? (2019). Retrieved January 23, 2019, from https://www.epic.com/community#NIH
41. Wood, L. (2018, January 29). Global $27.3 Billion Multiple Sclerosis Drugs Market 2017-2025. Retrieved
January 23, 2019, from https://www.businesswire.com/news/home/20180129005741/en/Global-27.3-
Billion-Multiple-Sclerosis-Drugs-Market
42. Yellowfin BI Pricing. (2019). Retrieved January 23, 2019, from
https://www.g2crowd.com/products/yellowfin-bi/pricing
43. Zhang, Z., Ahmed, F., Mittal, A., Ramakrishnan, I., Zhao, R., Viccellio, A., & Mueller, K. (2011).
AnamneVis: A Framework for the Visualization of Patient History and Medical Diagnostics Chains.
Proceedings of the IEEE VisWeek Workshop on Visual Analytics in Health Care.
Appendix 1: Patient-Neurologist Barriers to Communication
Stacked MRI icons move on hover | Remove the extraneous
“No new lesions found” hover labels unnecessary when icon for no-lesion MRI is already distinct | Remove the extraneous
Single icon for medication regimen implies one-time event; not clear that colored graph sections represent medication regimen | Information coding, spatial organization
Date format different between medication regimens and events | Consistency
Spacing between rows of events is off, some overlap | Spatial organization
Table 2: Usability problems identified.
Appendix 3:
Appendix 4:
Appendix 5 (cont’d.)
(i) In the previous survey, 34.6% of neurologists listed general neurology as their primary
subspecialty and 4.2% listed Neuroimmunology and MS as their subspecialty. Since patients
with MS see both general neurologists and MS specialists, the survey implies that up to 38.8%
of neurologists work with MS patients.
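The 38.8% figure is simply the sum of the two survey percentages, assuming the two subspecialty groups do not overlap; a one-line check (values copied from the note above):

```python
# Survey shares of neurologists, in percent, from the note above.
general_neurology = 34.6
neuroimmunology_ms = 4.2

# Upper bound on the share of neurologists who may see MS patients,
# assuming the two subspecialty groups do not overlap.
upper_bound = general_neurology + neuroimmunology_ms
print(round(upper_bound, 1))  # 38.8
```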
Source: (“Straight Talk…,” 2018; “How Much Does Zoho Analytics Cost?,” 2018; “Yellowfin BI Pricing,” 2019;
“Everything You Need to Know…,” 2018; “Looker vs. Tableau…,” 2018)
Appendix 8: Top 16 NIH Funded Institutions in US with High Neurologist Populations
Appendix 11: AnamneVis Hierarchical Layout