
Dissertation submitted to,

Project Title
ROLE OF ARTIFICIAL INTELLIGENCE IN HEALTH
CARE

BACHELOR OF PHARMACY

Submitted By,
Mr. Patil Sagar Ravindra
Mr. Patil Saurabh Dasharath
Miss. Patil Swati Rangrao

Under the guidance of


Mr. A. S. Patil
Assistant Professor, Department of Pharmaceutical Chemistry

R. C. Patel Institute of Pharmaceutical Education and Research, Shirpur,


Dist. (M.S.), India 425405 (2017-18).
CANDIDATE’S DECLARATION

We hereby certify that the work which is being presented in the


project entitled “Role of Artificial Intelligence in Healthcare” in
fulfillment of the requirements for the award of degree of Bachelor of
Pharmacy at the R. C. Patel Institute of Pharmaceutical Education and
Research, Shirpur is an authentic record of our own work carried out
during the academic year 2017 - 18 under the supervision of
Mr.A.S.Patil.
The matter presented in this project has not been submitted by us to
any other University/Institute for the award of any other degree.

Date: /03/2018

Place: Shirpur Mr. Patil Sagar Ravindra


Mr. Patil Saurabh Dasharath
Miss. Patil Swati Rangrao
CERTIFICATE

This is to certify that the work presented in the project entitled “Role of
Artificial Intelligence in Healthcare” for the degree of Bachelor of
Pharmacy has been carried out by Mr. Patil Sagar Ravindra, Mr. Patil Saurabh
Dasharath, Miss. Patil Swati Rangrao under the supervision of Mr. A. S. Patil
at the R. C. Patel Institute of Pharmaceutical Education and Research, Shirpur.

Date: / /2018 Dr. S. J. Surana


Place: Shirpur
Principal
CERTIFICATE

This is to certify that the work presented in the project entitled “Role of
Artificial Intelligence in Healthcare” for the degree of Bachelor of
Pharmacy has been carried out by Mr. Patil Sagar Ravindra,

Mr. Patil Saurabh Dasharath, Miss. Patil Swati Rangrao


at the R. C. Patel Institute of Pharmaceutical Education and Research, Shirpur.
To the best of my knowledge and belief, the project embodies the work of the
candidates themselves, has been duly completed, and fulfills the requirements of the
ordinance relating to the B. Pharm. degree of the university.

Date: / /2018
Mr. A. S. Patil
Place: Shirpur
Asst. Professor
Acknowledgement

We would like to express our sincere gratefulness to the people
who have helped us most throughout our project. We are indebted to our
guide, Mr. A. S. Patil, Assistant Professor, Department of
Pharmaceutics, R. C. Patel Institute of Pharmaceutical Education
and Research, Shirpur, for offering his focused guidance, patient
supervision, and critical comments as and when required. He has
been devoted to motivating us to accomplish this minor research
endeavor.

Special gratitude towards the Hon. Principal, R. C. Patel
Institute of Pharmaceutical Education and Research,
Dr. S. J. Surana, and the Chairman, SES, Shirpur, for granting us the
required facilities and thorough support during our graduation journey as
well as for the UG project work.

We are always obliged to Dr. A. A. Shirkhedkar, Vice-Principal
and Head, Dept. of Pharmaceutical Chemistry, for his impeccable
guidance, continuous support and advice, which helped us to perform
the task better. We are very grateful to almighty God, our inspiring
parents, praiseworthy teachers and our friends, who guided us
throughout the UG programme during our fascinating glide, and
also to those who were directly and indirectly involved; without
their constant caring and loving support we would have been unable
to reach this advanced and precious stage of life.
Dedication

Dedicated with humility and reverence
to the fond memory of our beloved
parents, who encouraged us
and kindled a passion in us to always
learn more

Thanks

We wish to thank them
for their boundless
patience and eternal understanding
during the completion of this project
INDEX

Sr. No.  Title

1   INTRODUCTION
2   HISTORY
3   DEVICES OF AI IN HEALTH CARE
4   SCOPE OF AI IN HEALTH CARE
5   ROLE OF AI IN HEALTH CARE
6   METHODS
7   EMERGENCE OF AI & ITS SIGNIFICANCE
8   BENEFITS OF AI IN HEALTH CARE
9   AI RISKS IN HEALTH CARE
10  CHALLENGES FOR AI IN HEALTH CARE
11  APPLICATIONS
12  CONCLUSION
13  REFERENCES
INTRODUCTION

Artificial Intelligence (AI), where computers perform tasks that are usually assumed to require human
intelligence, is currently being discussed in nearly every domain of science and engineering. Major
scientific competitions like the ImageNet Large Scale Visual Recognition Challenge are providing
evidence that computers can achieve human-like competence in image recognition. AI has also enabled
significant progress in speech recognition and natural language processing. All of these advances open
questions about how such capabilities can support, or even enhance, human decision making in health
and health care. Two recent high-profile research papers have demonstrated that AI can perform clinical
diagnostics on medical images at levels equal to experienced clinicians, at least in very specific
examples.

The promise of AI is tightly coupled to the availability of relevant data. In the health domains, there is
an abundance of data. However, the quality of, and accessibility to, these resources remain a significant
challenge in the United States. On one hand, health data has privacy issues associated with it, making
the collection and sharing of health data particularly cumbersome compared to other types of data. In
addition, health data are quite expensive to collect, for instance in the case of longitudinal studies and
clinical trials, so it tends to be tightly guarded once it is collected. Further, the lack of interoperability of
electronic health record systems impedes even the simplest of computational methods and the inability
to capture relevant social and environmental information in existing systems leaves a key set of
variables out of data streams for individual health.

At the same time, there is wide private-sector interest in AI in health data collection and applications,
as illustrated by the numerous startups related to AI in health and health care (a partial list as of 2016
is captured in a figure). Most of the 106 listed startups are headquartered in the US. There are startups in
15 different countries, with the UK and Israel having the largest number of startups outside the US. The
two most popular topics, medical imaging & diagnostics and patient data & risk analytics, are a strong
focus in this report. However, another key focus of this report, the importance of environmental factors,
is less apparent in the startup activity shown.

Artificial intelligence
Artificial intelligence, defined as intelligence exhibited by machines, has many
applications in today's society. More specifically, it is weak AI, the form of AI where programs are
developed to perform specific tasks, that is being utilized for a wide range of activities including
medical diagnosis, electronic trading, robot control, and remote sensing. AI has been used to develop
and advance numerous industries, including finance, healthcare, education, and transportation.

X-ray of a hand, with automatic calculation of bone age by computer software.

HISTORY

• Research in the 1960s and 1970s produced the first problem-solving program, or expert system,
known as Dendral. While it was designed for applications in organic chemistry, it provided the basis for
the subsequent system MYCIN, considered one of the most significant early uses of artificial
intelligence in medicine. MYCIN and other systems such as INTERNIST-1 and CASNET did not
achieve routine use by practitioners, however. Later decades brought new levels of network
connectivity, as well as the recognition by researchers and developers that AI systems in healthcare
must be designed to accommodate the absence of perfect data and build on the expertise of physician
users. New approaches involving fuzzy set theory, Bayesian networks and artificial neural networks
were created to reflect the evolved needs of intelligent computing systems in healthcare.

• Medical and technological advancements occurring over this half-century period that have
simultaneously enabled the growth of healthcare-related applications of AI include:

- Improvements in computing power, resulting in faster data collection and data processing
- Increased volume and availability of health-related data from personal and healthcare-related devices
- Growth of genomic sequencing databases
- Widespread implementation of electronic health record systems
- Improvements in natural language processing and computer vision, enabling machines to replicate
human perceptual processes

Devices of AI in Healthcare

1. Morepen Gluco One BG 03 Kit with 100 Strips

The Dr. Morepen Gluco One BG 03 Glucometer measures the concentration of blood glucose by
self-testing, for both professional and home use.

Treatment:
- Routine blood glucose (sugar) testing with the Dr. Morepen BG03 shows how the treatment
program affects the blood glucose level
- Testing blood glucose frequently can help keep diabetes under control
- Dr. Morepen BG03 strips require only a tiny drop of blood

Use:
Procedure to operate/ How to use:
- Place the blood on a disposable "test strip" that is inserted in the meter
- The test strip contains chemicals that react with glucose, and the glucometer shows the result
- Some meters measure the amount of electricity that passes through the test strip; others measure how
much light reflects from it

Indication:
Diabetes Mellitus

2. Choice MMed MD300C2 Fingertip Pulse Oximeter

The Choice MMed MD300C2 Fingertip Pulse Oximeter is a very important and common medical
device to check a patient's blood-oxygen saturation (SpO2) and pulse rate, for hospital use and homecare.
Salient features of the Choice MMed MD300C2 Fingertip Pulse Oximeter:
- Adjustable brightness; 6 display modes
- Real-time prompt for battery status
- 2 pcs AAA-size batteries; automatic power-off
- SpO2 measurement range: 70%-100%
- Resolution: 1%
- Measurement accuracy (SpO2): 70%-100%: ±2%; ≤69%: unspecified
- Pulse rate measurement range: 30-250 bpm
- Resolution: 1 bpm
- Measurement accuracy (pulse rate): 30-99 bpm: ±2 bpm; 100-250 bpm: ±2%

3. Social robots

The next technology trend is social or companion robotics. Social robots use artificial intelligence to
understand people and respond appropriately. Simple “robots” like Paro, the therapeutic seal, have been
around for many years and respond when petted or spoken to and have been used to reduce stress in
elderly patients.

Advances in natural language processing and social awareness algorithms have already begun to make
social robots dramatically more useful to consumers as companions or personal assistants. The Amazon
Echo digital assistant, Alexa, is a popular product that I would argue is one of the first social robots for
consumers. Jibo, which is supposed to hit the market this year, is a connected robotic personal assistant,
kind of like a cross between the Echo, a robotic toy, and your computer.

Carnegie Mellon University’s Dr. Justine Cassell has developed the Socially-Aware Robot Assistant or
“SARA”, which interacts with people in a whole new way, personalizing the interaction and improving
task performance by relying on information about the relationship between the human user and virtual
assistant. Dr. Cassell explains that “AI is not a technology, it’s a technique for understanding people
and making machines act the way people do.”

4. Rossmax HC700 Non-Contact Telephoto Thermometer

It provides a consistent and larger distance (within 10 cm) temperature measurement without skin
contact.
Salient features of the Rossmax HC700 Non-Contact Telephoto Thermometer:
- Telephoto temperature measurement distance within 10 cm (3.94 inches).

Healthcare data
Before AI systems can be deployed in healthcare applications, they need to be ‘trained’ through data
that are generated from clinical activities, such as screening, diagnosis, treatment assignment and so on,
so that they can learn similar groups of subjects and associations between subject features and outcomes of
interest. These clinical data often exist in, but are not limited to, the form of demographics, medical notes,
electronic recordings from medical devices, physical examinations, and clinical laboratory tests and images.

Specifically, in the diagnosis stage, a substantial proportion of the AI literature analyses data from
diagnostic imaging, genetic testing and electrodiagnosis (figure 1). For example, Jha and Topol
urged radiologists to adopt AI technologies when analysing diagnostic images that contain vast data.
Li et al studied the uses of abnormal genetic expression in long non-coding RNAs to
diagnose gastric cancer. Shin et al developed an electrodiagnosis support system for localising neural
injury.
Figure 1
The data types considered in the artificial intelligence (AI) literature. The comparison is
obtained through searching the diagnosis techniques in the AI literature on the PubMed database.

In addition, physical examination notes and clinical laboratory results are the other two major data
sources (figure 1). We distinguish them from image, genetic and electrophysiological (EP) data because
they contain large portions of unstructured narrative texts, such as clinical notes, that are not directly
analysable. As a consequence, the corresponding AI applications focus on first converting the
unstructured text to machine-understandable electronic medical record (EMR). For example,
Karakülah et al used AI technologies to extract phenotypic features from case reports to enhance the
diagnosis accuracy of congenital anomalies.

AI devices
The above discussion suggests that AI devices mainly fall into two major categories. The first category
includes machine learning (ML) techniques that analyse structured data such as imaging, genetic and EP
data. In the medical applications, the ML procedures attempt to cluster patients’ traits, or infer the
probability of the disease outcomes. The second category includes natural language processing (NLP)
methods that extract information from unstructured data such as clinical notes/medical journals to
supplement and enrich structured medical data. The NLP procedures target at turning texts to machine-
readable structured data, which can then be analysed by ML techniques.

For better presentation, the flow chart in figure 2 describes the road map from clinical data generation,
through NLP data enrichment and ML data analysis, to clinical decision making. We comment that the
road map starts and ends with clinical activities. As powerful as AI techniques can be, they have to be
motivated by clinical problems and be applied to assist clinical practice in the end.

Figure 2

The road map from clinical data generation to natural language processing data enrichment, to
machine learning data analysis, to clinical decision making. EMR, electronic medical record; EP,
electrophysiological.
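As a toy illustration of this road map, the sketch below first applies a rule-based "NLP" step that turns a free-text note into a structured record, then a stand-in decision rule in place of a trained ML model. The note text, the field names, and the 140/90 threshold are illustrative assumptions, not part of any system described above.

```python
import re

# Toy "NLP" step: pull structured fields out of a free-text clinical note.
# The note format and field names here are invented for illustration.
def extract_record(note: str) -> dict:
    record = {}
    age = re.search(r"(\d+)[- ]year[- ]old", note)
    if age:
        record["age"] = int(age.group(1))
    bp = re.search(r"BP\s*(\d+)/(\d+)", note)
    if bp:
        record["systolic"] = int(bp.group(1))
        record["diastolic"] = int(bp.group(2))
    return record

# Toy "ML" step: a hand-written decision rule standing in for a trained model.
def flag_hypertension(record: dict) -> bool:
    return record.get("systolic", 0) >= 140 or record.get("diastolic", 0) >= 90

note = "62-year-old male, BP 150/95, reports occasional dizziness."
record = extract_record(note)
print(record)                      # {'age': 62, 'systolic': 150, 'diastolic': 95}
print(flag_hypertension(record))   # True
```

A real pipeline would replace both steps with learned components, but the shape of the flow (unstructured text, then structured record, then decision support) is the same.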

Disease focus
Despite the increasingly rich AI literature in healthcare, the research mainly concentrates around a few
disease types: cancer, nervous system disease and cardiovascular disease (figure 3). We discuss
several examples below.

Figure 3

The leading 10 disease types considered in the artificial intelligence (AI) literature. The first
vocabularies in the disease names are displayed. The comparison is obtained through searching
the disease types in the AI literature on PubMed.

1. Cancer: Somashekhar et al demonstrated that the IBM Watson for Oncology would be a reliable AI
system for assisting the diagnosis of cancer through a double-blinded validation study. Esteva et
al analysed clinical images to identify skin cancer subtypes.

2. Neurology: Bouton et al developed an AI system to restore the control of movement in patients
with quadriplegia. Farina et al tested the power of an offline man/machine interface that uses the
discharge timings of spinal motor neurons to control upper-limb prostheses.

3. Cardiology: Dilsizian and Siegel discussed the potential application of the AI system to diagnose
heart disease through cardiac imaging. Arterys recently received clearance from the US Food and Drug
Administration (FDA) to market its Arterys Cardio DL application, which uses AI to provide automated,
editable ventricle segmentations based on conventional cardiac MRI images.
The concentration around these three diseases is not completely unexpected. All three diseases are
leading causes of death; therefore, early diagnoses are crucial to prevent the deterioration of patients’
health status. Furthermore, early diagnoses can be potentially achieved through improving the analysis
procedures on imaging, genetic, EP or EMR, which is the strength of the AI system.

Besides the three major diseases, AI has been applied in other diseases as well. Two very recent
examples were Long et al, who analysed the ocular image data to diagnose congenital cataract
disease, and Gulshan et al, who detected referable diabetic retinopathy through the retinal fundus
photographs.

The rest of the paper is organised as follows. In section 2, we describe popular AI devices in ML and
NLP; the ML techniques are further grouped into classical techniques and the more recent deep learning.
Section 3 focuses on discussing AI applications in neurology, from the three aspects of early disease
prediction and diagnosis, treatment, and outcome prediction and prognosis evaluation. We then conclude in
section 4 with some discussion about the future of AI in healthcare.

The AI devices: ML and NLP

In this section, we review the AI devices (or techniques) that have been found useful in the medical
applications. We categorise them into three groups: the classical machine learning techniques, the more
recent deep learning techniques and the NLP methods.

Classical ML
ML constructs data analytical algorithms to extract features from data. Inputs to ML algorithms include
patient ‘traits’ and sometimes medical outcomes of interest. A patient’s traits commonly include
baseline data, such as age, gender, disease history and so on, and disease-specific data, such as
diagnostic imaging, gene expressions, EP test, physical examination results, clinical symptoms,
medication and so on. Besides the traits, patients’ medical outcomes are often collected in clinical
research. These include disease indicators, patient’s survival times and quantitative disease levels, for
example, tumour sizes. To fix ideas, we denote the jth trait of the ith patient by X_ij, and the outcome of
interest by Y_i.

Depending on whether to incorporate the outcomes, ML algorithms can be divided into two major
categories: unsupervised learning and supervised learning. Unsupervised learning is well known for
feature extraction, while supervised learning is suitable for predictive modelling via building some
relationships between the patient traits (as input) and the outcome of interest (as output). More recently,
semisupervised learning has been proposed as a hybrid between unsupervised learning and supervised
learning, which is suitable for scenarios where the outcome is missing for certain subjects. These three
types of learning are illustrated in figure 4.

Figure 4

Graphical illustration of unsupervised learning, supervised learning and semisupervised learning.
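The supervised/unsupervised distinction can be sketched in a few lines of plain Python. The patient traits X_ij, outcomes Y_i, and the distance threshold below are invented for illustration; a real analysis would use a proper ML library and far more data.

```python
import math

# Hypothetical traits X_ij = (age, biomarker level) and outcomes Y_i
# (1 = disease, 0 = healthy) for four illustrative patients.
patients = [(45, 1.2), (50, 1.4), (70, 3.1), (75, 2.9)]
outcomes = [0, 0, 1, 1]

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Supervised learning (here, 1-nearest neighbour): predict Y for a new
# patient from the labelled pairs (X_i, Y_i).
def predict(new_patient):
    nearest = min(range(len(patients)),
                  key=lambda i: distance(patients[i], new_patient))
    return outcomes[nearest]

# Unsupervised learning: ignore Y entirely and group patients purely by
# similarity (naive single-pass clustering with a distance threshold).
def cluster(threshold=10.0):
    clusters = []
    for p in patients:
        for c in clusters:
            if distance(c[0], p) < threshold:
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

print(predict((72, 3.0)))   # 1 -> resembles the older, high-biomarker patients
print(len(cluster()))       # 2 -> the two groups emerge without using Y
```

Semisupervised learning would sit between the two: it would use the few labelled outcomes that exist while still exploiting the structure of the unlabelled traits.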

Scope of AI in Healthcare
The following questions will be posed to the JASON group for further refinement in collaboration with
all stakeholders.

1. AI Opportunities:

a. Ways to Improve Health and Health Care

i. In what ways might artificial intelligence advance efforts to improve individual health, health care
(care of individuals), and community health (health status of sub-populations)?

ii. What evidence exists regarding artificial intelligence’s relevance for health, health care, and
community health? What is the demonstrated state-of-the art in these areas?

iii. What are the most high-value areas (for example, reducing the cost of expensive treatments, prevention
of mortality or morbidity in disproportionately affected populations, improvement in productivity due to
better health, or focusing on risk mitigation where the impacted population is large) where artificial
intelligence could be focused to contribute quickly and efficiently?

iv. How can the benefits of artificial intelligence applications be defined and assessed?

2. AI Considerations:

a. Technical

i. What are the considerations for the data sources needed to support the development of artificial
intelligence programs for health and health care?

For example, what are the data quality, breadth, and depth necessary to support the deployment of
appropriate artificial intelligence technology for health and health care?

ii. How does research in computational, statistical, and data sciences need to advance in order for these
technologies to reach their fullest potential?

iii. What technology barriers may arise in the technology adoption associated with artificial intelligence
for health and health care?

i. What are the potential unintended consequences, including real or perceived dangers, of artificial
intelligence focused on improving health and health care?
ii. What are the potential risks of artificial intelligence inadvertently exacerbating health inequalities?

b. Workforce

i. What workforce changes may be needed to ensure effective broad-based adoption of data-rich
artificial intelligence applications?

3. AI Implementation

a. Are there relevant projects of AI in individual health, community health and
health care that currently demonstrate the potential value of AI and feasibility of scale-up?

b. Beyond relevant projects to learn from, have other industries successfully utilized AI in ways which
might translate well to individual health, community health, and health care? Are there similar or
dissimilar facets of implementation barriers that exist for AI in individual health, community health, and
health care?

10 Roles Artificial Intelligence Can Play in Healthcare

1. Patients who feel a little unwell or think they need medical advice will dial into a
telehealth service and talk to a nurse. Data on their condition and symptoms may be uploaded in
real time from a smart phone or smart sensors, and an artificially intelligent system will suggest
next steps to the nurse on the line. And by the way, smartphones will be used to regularly send
pictures or videos which a computer will read and recommend how to proceed.

2. Patients who feel sufficiently unwell will not go to a hospital urgent care department and
instead will mostly go to a conveniently located small clinic, probably in a local mall or chain
pharmacy. There the patient will be seen by a nurse practitioner who will be able to take into
account a patient’s entire medical history by pulling up a universally accessible, privacy
protected, electronic health record, or EHR.

3. Patients with chronic conditions will be cared for at home by visiting nurses and doctors
(matched by smart platforms and “Uber”-type technology) who can then call in as frequently as
necessary either in person or via tele health means. People who are not ambulatory at all will also
be able to be watched over by AI robots that also provide some basic care in situ.
4. Where a hospital is still needed, for say major surgery, these will be making extensive
use of technology, much of which will be available in every patient room (or portably deliverable
to it) like a mini ICU. AI will feature significantly in these rooms and will be blended with
human resources.

5. In-Patients (in hospitals, surgery centers, clinics, skilled nursing centers, hospices etc.)
will have multiple screens around them which can deliver tailored education by AI means and be
responsive to patient requests for feedback (by just using their voice as a command).

6. Human medical staffing ratios will be adjusted constantly according to the individual
patient’s need as determined by AI risk-monitoring and treatment algorithms and by adjusting
according to a continually updated electronic health record.

7. Most orders and notes from doctors will be entered into the EHR through natural
language voice recognition software. Each patient will control his or her own EHR, a digital
compendium of clinician-generated notes and data with patient-generated information and
preferences (all of which will be simply analyzed, charted and displayed as a patient wishes).

8. Patient alerts will be calibrated to clearly distinguish life-threatening issues and
problems from minor conditions or ignorable symptoms.

9. Doctors' efforts will be greatly assisted, especially when engaged in differential
diagnosis, evidence-based treatment and precision medicine practice, by cognitive computing
systems like IBM's Watson.

10. Artificial intelligence applied to cloud-based “Big Data” will assist clinicians by
comparing and contrasting an individual patient's characteristics with other patients in the database
with similar conditions in order to find the best possible diagnoses and solutions.

Methods of AI in Healthcare

Part 1: Psychological AI and Cognitive Modeling: The Fable of Car World

1) Psychological AI with Novel Functionality

As an alternative to direct quantitative comparisons to psychological experimental data, a researcher


could focus just on creating a computer model of a cognitive ability that humans have, regardless of
whether someone has designed a psychology experiment to explore it. This model should generate
predictions, either qualitative or quantitative, that could be tested with a psychology experiment, but the
modeler need not run this experiment. Ideally, a paper using this methodology explores what such an
experiment might look like, and makes specific predictions. If the core of the scientific method is theory
generation, hypothesis generation, and hypothesis testing, then this approach focuses just on theory and hypothesis
generation. Although frowned upon in psychology, non-experimental science is an important part of
many disciplines, including theoretical biology, theoretical geology, theoretical sociology, and
especially theoretical physics. Famously, Einstein’s paper on special relativity contained a detailed
model, specific predictions…but only a call for experiments to test his hypotheses . Psychology is
unusual among the sciences in that it does not have a large theoretical subdivision within the discipline
itself. Psychological AI can generate hypotheses about human beings. This is tricky, because the
researcher is not evaluating the hypothesis to either AI’s or psychology’s highest standards. However, if
the AI is interesting enough, and built on solid cognitive principles, it can be of value. For example, one
might build a spatial inference system based on what is known of spatial inference. The model could
behave in ways that make predictions that could be tested in the laboratory on people. Robert West
(personal communication) calls this “forward engineering.”

Rather than judge this kind of modeling on the standards of experimental psychology, think of it as
computational philosophy. Philosophers doing theoretical psychology often do not run experiments that
could test their theories, yet they can point experimental work in fruitful directions. If the model exhibits
novel behavior or uses a different method, then the very fact that the model worked as designed is an
acceptable evaluation all by itself, even without statistical testing. Like the plane in Car World, the
model does something brand new. Showing that a model can work at all is an important step towards
showing that cognition works that way in humans. Still, a system that demonstrates novel functionality
might be hard to evaluate: the more novel the functionality, the fewer other works will be appropriate for
comparison.

Part 2: Engineering AI
Engineering AI may appear irrelevant to cognitive science. Whether it is depends on one’s conception
of what cognitive science research is and should be. For our purposes, there are two aspects to a
conception of what cognitive science should be: its proper research methodologies and its proper subject
matter.

Cognitive Science Methodology

One aspect of cognitive science methodology is its multidisciplinarity: how interdisciplinary does
research need to be to qualify as cognitive science? An inclusive approach includes any research (on the
proper subject matter) in any of cognitive science’s component disciplines. There are varying degrees of
the exclusive approach, but all require some amount of multidisciplinarity. A weakly exclusive
approach might merely require comparing findings with problems and theories in another participating
discipline. We will call this simply “interdisciplinarity.” However, this would exclude from cognitive
science many historically interesting findings. For example, this would likely exclude Quillian’s (1968)
work on semantic memory, even though Collins and Loftus (1975) cite it as the direct inspiration of their
seminal cognitive science work on spreading activation—and both are included in Readings in
Cognitive Science (Collins & Smith, 1988), a required text at the author’s alma mater. A strictly
exclusive approach might require that each piece of research involve methodologies from more than one
discipline. For example, doing a psychology experiment and some cognitive modeling in the same
work. We will call this stricter conception “transdisciplinarity.”

Problems of AI in Healthcare

The overall research goal of artificial intelligence is to create technology that allows computers and
machines to function in an intelligent manner. The general problem of simulating intelligence has been
broken down into sub-problems. These consist of particular traits or capabilities that researchers expect
an intelligent system to display. The traits described below have received the most attention.

Reasoning, problem solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they
solve puzzles or make logical deductions. By the late 1980s and 1990s, AI research had developed
methods for dealing with uncertain or incomplete information, employing concepts from probability and
economics.

For difficult problems, algorithms can require enormous computational resources—most experience a
"combinatorial explosion": the amount of memory or computer time required becomes astronomical for
problems of a certain size. The search for more efficient problem-solving algorithms is a high priority.
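The scale of this explosion is easy to demonstrate. For instance, an exhaustive search over the orderings of n items (say, a route visiting n clinics) must consider n! candidates, which becomes astronomical very quickly; the billion-candidates-per-second rate below is an assumed figure for illustration.

```python
import math

# n! candidate orderings for an exhaustive search over n items: the
# "combinatorial explosion" described above.
for n in (5, 10, 15, 20):
    print(n, math.factorial(n))

# At an assumed one billion candidates per second, n = 20 alone would take
# roughly 77 years of computation:
seconds = math.factorial(20) / 1e9
print(round(seconds / (3600 * 24 * 365)))  # 77
```

This is why the search for more efficient algorithms, and for heuristics that avoid exhaustive enumeration, is such a high priority.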

Human beings ordinarily use fast, intuitive judgments rather than the step-by-step deduction that early AI
research was able to model. AI has progressed using "sub-symbolic" problem solving: embodied agent
approaches emphasize the importance of sensorimotor skills to higher reasoning; neural net research
attempts to simulate the structures inside the brain that give rise to this skill; statistical approaches to AI
mimic the human ability to guess.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between
those concepts.

Main articles: Knowledge representation and Commonsense knowledge

Knowledge representation and knowledge engineering are central to AI research. Many of the problems
machines are expected to solve will require extensive knowledge about the world. Among the things that
AI needs to represent are: objects, properties, categories and relations between objects; situations,
events, states and time; causes and effects; knowledge about knowledge (what we know about what
other people know); and many other, less well researched domains. A representation of "what exists" is
an ontology: the set of objects, relations, concepts, and properties formally described so that software
agents can interpret them. The semantics of these are captured as description logic concepts, roles, and
individuals, and typically implemented as classes, properties, and individuals in the Web Ontology
Language. The most general ontologies are called upper ontologies, which attempt to provide a
foundation for all other knowledge by acting as mediators between domain ontologies that cover
specific knowledge about a particular knowledge domain (field of interest or area of concern). Such
formal knowledge representations are suitable for content-based indexing and retrieval, scene
interpretation, clinical decision support, knowledge discovery via automated reasoning (inferring new
statements based on explicitly stated knowledge), etc. Video events are often represented as SWRL
rules, which can be used, among others, to automatically generate subtitles for constrained videos.
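
A minimal sketch of the idea that an ontology lets software agents interpret concepts and relations (the class names here are hypothetical, not drawn from any real clinical ontology): "is-a" links between concepts support simple automated reasoning, such as inferring new category memberships.

```python
# Hypothetical toy ontology: each concept maps to its direct parent concept.
IS_A = {
    "bacterial_pneumonia": "pneumonia",
    "pneumonia": "respiratory_infection",
    "respiratory_infection": "disease",
}

def ancestors(concept):
    """All broader concepts reachable via transitive is-a links."""
    out = []
    while concept in IS_A:
        concept = IS_A[concept]
        out.append(concept)
    return out

def is_a(concept, category):
    """Inferred statement: concept falls under category."""
    return concept == category or category in ancestors(concept)

print(is_a("bacterial_pneumonia", "disease"))  # True
```

Real systems express the same relations in a description logic such as the Web Ontology Language rather than Python dictionaries, but the inference step (deriving statements not explicitly stated) is the same in spirit.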
Among the most difficult problems in knowledge representation are:

Default reasoning and the qualification problem

Many of the things people know take the form of "working assumptions". For example, if a bird comes
up in conversation, people typically picture an animal that is fist-sized, sings, and flies. None of these
things are true about all birds. John McCarthy identified this problem in 1969 as the qualification
problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number
of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research
has explored a number of solutions to this problem.

The breadth of commonsense knowledge

The number of atomic facts that the average person knows is very large. Research projects that attempt
to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts
of laborious ontological engineering—they must be built, by hand, one complicated concept at a time. A
major goal is to have the computer understand enough concepts to be able to learn by reading from
sources like the Internet, and thus be able to add to its own ontology.

The sub-symbolic form of some commonsense knowledge

Much of what people know is not represented as "facts" or "statements" that they could express
verbally. For example, a chess master will avoid a particular chess position because it "feels too
exposed" or an art critic can take one look at a statue and realize that it is a fake. These are non-
conscious and sub-symbolic intuitions or tendencies in the human brain. Knowledge like this informs,
supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-
symbolic reasoning, it is hoped that situated AI, computational intelligence, or statistical AI will provide
ways to represent this kind of knowledge.

 Planning of AI in Healthcare

A hierarchical control system is a form of control system in which a set of devices and governing
software is arranged in a hierarchy.

Main article: Automated planning and scheduling


Intelligent agents must be able to set goals and achieve them. They need a way to visualize the future—
a representation of the state of the world and be able to make predictions about how their actions will
change it—and be able to make choices that maximize the utility of available choices.

In classical planning problems, the agent can assume that it is the only system acting in the world,
allowing the agent to be certain of the consequences of its actions. However, if the agent is not the only
actor, it must be able to reason under uncertainty. This calls for an agent that can not
only assess its environment and make predictions, but also evaluate its predictions and adapt based on its
assessment.
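
The classical-planning setting described above can be sketched in a few lines (the states and actions are a hypothetical hospital-errand toy problem, purely illustrative): because the agent is assumed to be the only actor, every action has a deterministic effect, and a breadth-first search over states yields a shortest sequence of actions reaching the goal.

```python
from collections import deque

# Hypothetical deterministic world model: state -> {action: next_state}.
ACTIONS = {
    "ward":     {"walk_to_lab": "lab", "walk_to_pharmacy": "pharmacy"},
    "lab":      {"run_test": "results_ready"},
    "pharmacy": {"pick_up_meds": "meds_in_hand"},
}

def plan(start, goal):
    """Breadth-first search returning a shortest action sequence, or None."""
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action, nxt in ACTIONS.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [action]))
    return None

print(plan("ward", "results_ready"))  # ['walk_to_lab', 'run_test']
```

Once other actors or uncertain action outcomes enter the picture, this simple search no longer suffices and the agent must plan under uncertainty, as the paragraph above notes.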

Multi-agent planning uses the cooperation and competition of many agents to achieve a given goal.
Emergent behavior such as this is used by evolutionary algorithms and swarm intelligence.

 Learning AI in Healthcare

Main article: Machine learning

Machine learning, a fundamental concept of AI research since the field's inception, is the study of
computer algorithms that improve automatically through experience.

Unsupervised learning is the ability to find patterns in a stream of input. Supervised learning includes
both classification and numerical regression. Classification is used to determine what category
something belongs in, after seeing a number of examples of things from several categories. Regression
is the attempt to produce a function that describes the relationship between inputs and outputs and
predicts how the outputs should change as the inputs change. In reinforcement learning the agent is
rewarded for good responses and punished for bad ones. The agent uses this sequence of rewards and
punishments to form a strategy for operating in its problem space. These three types of learning can be
analyzed in terms of decision theory, using concepts like utility. The mathematical analysis of machine
learning algorithms and their performance is a branch of theoretical computer science known as
computational learning theory.
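
As a minimal sketch of supervised regression (toy data, purely illustrative): ordinary least squares produces a function relating inputs to outputs, which can then predict how the output should change as the input changes.

```python
# Fit a line y = a*x + b by ordinary least squares, in plain Python.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx  # slope, intercept

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 8.0]   # roughly y = 2x
a, b = fit_line(xs, ys)
print(round(a, 2), round(b, 2))  # → 1.99 0.05
```

Classification works analogously but predicts a category instead of a number, and reinforcement learning replaces the labeled examples with a stream of rewards and punishments.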

 Natural language processing


A parse tree represents the syntactic structure of a sentence according to some formal grammar.

Main article: Natural language processing

Natural language processing gives machines the ability to read and understand human language. A
sufficiently powerful natural language processing system would enable natural language user
interfaces and the acquisition of knowledge directly from human-written sources, such as newswire texts.
Some straightforward applications of natural language processing include information retrieval, text
mining, question answering and machine translation.

A common method of processing and extracting meaning from natural language is through semantic
indexing. Although these indexes require a large volume of user input, it is expected that increases in
processor speeds and decreases in data storage costs will result in greater efficiency.

 Perception
Main articles: Machine perception, Computer vision, and Speech recognition

Machine perception is the ability to use input from sensors (such as cameras, microphones, tactile
sensors, sonar and others) to deduce aspects of the world. Computer vision is the ability to analyze
visual input. A few selected subproblems are speech recognition, facial recognition and object
recognition.

 Motion and manipulation

Main article: Robotics

The field of robotics is closely related to AI. Intelligence is required for robots to handle tasks such as
object manipulation and navigation, with sub-problems such as localization, mapping, and motion
planning. These systems require that an agent is able to: Be spatially cognizant of its surroundings, learn
from and build a map of its environment, figure out how to get from one point in space to another, and
execute that movement (which often involves compliant motion, a process where movement requires
maintaining physical contact with an object).

 Social intelligence
Main article: Affective computing
Kismet, a robot with rudimentary social skills

Affective computing is the study and development of systems that can recognize, interpret, process, and
simulate human affects. It is an interdisciplinary field spanning computer sciences, psychology, and
cognitive science. While the origins of the field may be traced as far back as the early philosophical
inquiries into emotion, the more modern branch of computer science originated with Rosalind Picard's
1995 paper on "affective computing". A motivation for the research is the ability to simulate empathy,
where the machine would be able to interpret human emotions and adapt its behavior to give an
appropriate response to those emotions.

Emotion and social skills are important to an intelligent agent for two reasons. First, being able to
predict the actions of others by understanding their motives and emotional states allows an agent to make
better decisions. Concepts such as game theory and decision theory necessitate that an agent be able to
detect and model human emotions. Second, in an effort to facilitate human–computer interaction, an
intelligent machine may want to display emotions to appear more sensitive to the emotional dynamics of
human interaction.

 General intelligence

Main articles: Artificial general intelligence and AI-complete

Many researchers think that their work will eventually be incorporated into a machine with artificial
general intelligence, combining all the skills mentioned above and even exceeding human ability in most
or all these areas. A few believe that anthropomorphic features like artificial consciousness or an
artificial brain may be required for such a project.

Many of the problems above also require that general intelligence be solved. For example, even specific
straightforward tasks, like machine translation, require that a machine read and write in both languages
(NLP), follow the author's argument (reason), know what is being talked about (knowledge), and
faithfully reproduce the author's original intent (social intelligence). A problem like machine translation
is considered "AI-complete" because all of these problems need to be solved simultaneously in order to
reach human-level machine performance.

Approaches AI in Healthcare
There is no established unifying theory or paradigm that guides AI research. Researchers disagree about
many issues. A few of the most long-standing questions that have remained unanswered are these:
should artificial intelligence simulate natural intelligence by studying psychology or neurology? Or is
human biology as irrelevant to AI research as bird biology is to aeronautical engineering? Can
intelligent behavior be described using simple, elegant principles? Or does it necessarily require solving
a large number of completely unrelated problems? Can intelligence be reproduced using high-level
symbols, similar to words and ideas? Or does it require "sub-symbolic" processing? John Haugeland,
who coined the term GOFAI (Good Old-Fashioned Artificial Intelligence), also proposed that AI should
more properly be referred to as synthetic intelligence, a term which has since been adopted by some
non-GOFAI researchers.

Stuart Shapiro divides AI research into three approaches, which he calls computational psychology,
computational philosophy, and computer science. Computational psychology is used to make computer
programs that mimic human behavior. Computational philosophy is used to develop an adaptive, free-
flowing computer mind. Implementing computer science serves the goal of creating computers that can
perform tasks that only people could previously accomplish. Together, the humanesque behavior, mind,
and actions make up artificial intelligence.

 Cybernetics and brain simulation

Main articles: Cybernetics and Computational neuroscience

In the 1940s and 1950s, a number of researchers explored the connection between neurology,
information theory, and cybernetics. Some of them built machines that used electronic networks to
exhibit rudimentary intelligence, such as W. Grey Walter's turtles and the Johns Hopkins Beast. Many of
these researchers gathered for meetings of the Teleological Society at Princeton University and the Ratio
Club in England.

 Cognitive simulation

Economist Herbert Simon and Allen Newell studied human problem-solving skills and attempted to
formalize them, and their work laid the foundations of the field of artificial intelligence, as well as
cognitive science, operations research and management science. Their research team used the results of
psychological experiments to develop programs that simulated the techniques that people used to solve
problems. This tradition, centered at Carnegie Mellon University would eventually culminate in the
development of the Soar architecture in the middle 1980s.

 Anti-logic or scruffy
Researchers at MIT found that solving difficult problems in vision and natural language processing
required ad-hoc solutions – they argued that there was no simple and general principle that would
capture all the aspects of intelligent behavior. Roger Schank described their "anti-logic" approaches as
"scruffy". Commonsense knowledge bases are an example of "scruffy" AI, since they must be built by
hand, one complicated concept at a time.

 Embodied intelligence AI in Healthcare

This includes embodied, situated, behavior-based, and nouvelle AI. Researchers from the related field
of robotics, such as Rodney Brooks, rejected symbolic AI and focused on the basic engineering
problems that would allow robots to move and survive. Their work revived the non-symbolic viewpoint
of the early cybernetics researchers of the 1950s and reintroduced the use of control theory in AI. This
coincided with the development of the embodied mind thesis in the related field of cognitive science: the
idea that aspects of the body are required for higher intelligence.

 Computational intelligence and soft computing AI in Healthcare

Interest in neural networks and "connectionism" was revived by David Rumelhart and others in the
middle of the 1980s. Neural networks are an example of soft computing: they are solutions to
problems which cannot be solved with complete logical certainty, and where an approximate solution is
often sufficient. Other soft computing approaches to AI include fuzzy systems, evolutionary
computation and many statistical tools. The application of soft computing to AI is studied collectively
by the emerging discipline of computational intelligence.

 Integrating the approaches AI in Healthcare

Intelligent agent paradigm


An intelligent agent is a system that perceives its environment and takes actions which maximize its
chances of success. The simplest intelligent agents are programs that solve specific problems. More
complicated agents include human beings and organizations of human beings. The paradigm gives
researchers license to study isolated problems and find solutions that are both verifiable and useful,
without agreeing on one single approach. An agent that solves a specific problem can use any approach
that works – some agents are symbolic and logical, some are sub-symbolic neural networks and others
may use new approaches. The paradigm also gives researchers a common language to communicate
with other fields—such as decision theory and economics—that also use concepts of abstract agents.
The intelligent agent paradigm became widely accepted during the 1990s.

 Tools AI in Healthcare

In the course of 60 or so years of research, AI has developed a large number of tools to solve the most
difficult problems in computer science. A few of the most general of these methods are discussed below.

 Logic AI in Healthcare

Main articles: Logic programming and Automated reasoning

Logic is used for knowledge representation and problem solving, but it can be applied to other problems
as well. For example, the satplan algorithm uses logic for planning, and inductive logic programming is
a method for learning.

Several different forms of logic are used in AI research. Propositional or sentential logic is the logic of
statements which can be true or false. First-order logic also allows the use of quantifiers and predicates,
and can express facts about objects, their properties, and their relations with each other. Fuzzy logic is a
version of first-order logic which allows the truth of a statement to be represented as a value between 0
and 1, rather than simply True or False. Fuzzy systems can be used for uncertain reasoning and have
been widely used in modern industrial and consumer product control systems. Subjective logic
models uncertainty in a different and more explicit manner than fuzzy logic: a given binomial
opinion satisfies belief + disbelief + uncertainty = 1 within a Beta distribution. By this method,
ignorance can be distinguished from probabilistic statements that an agent makes with high confidence.
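
The belief + disbelief + uncertainty = 1 constraint can be sketched directly (the numbers below are purely illustrative): an opinion with high uncertainty represents ignorance, while one with low uncertainty represents a confident probabilistic statement, even when both project to similar probabilities.

```python
# A minimal sketch of a binomial opinion in subjective logic.
class Opinion:
    def __init__(self, belief, disbelief, uncertainty, base_rate=0.5):
        # The defining constraint: the three components must sum to 1.
        assert abs(belief + disbelief + uncertainty - 1.0) < 1e-9
        self.b, self.d, self.u, self.a = belief, disbelief, uncertainty, base_rate

    def expected_probability(self):
        # Projected probability: belief plus the base rate's share of uncertainty.
        return self.b + self.a * self.u

confident = Opinion(0.7, 0.2, 0.1)   # mostly evidence-backed
ignorant  = Opinion(0.0, 0.0, 1.0)   # total ignorance
print(confident.expected_probability(), ignorant.expected_probability())
```

Both opinions can be reduced to a single probability, but only the explicit uncertainty component tells the agent how much of that probability rests on actual evidence.
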
 Probabilistic methods for uncertain reasoning

Main articles: Bayesian network, Hidden Markov model, Kalman filter, Particle filter, Decision theory,
and Utility theory

Many problems in AI (in reasoning, planning, learning, perception and robotics) require the agent to
operate with incomplete or uncertain information. AI researchers have devised a number of powerful
tools to solve these problems using methods from probability theory and economics.

Bayesian networks are a very general tool that can be used for a large number of problems: reasoning,
learning (e.g., via expectation-maximization), planning, and perception. Probabilistic algorithms can also be used for filtering,
prediction, smoothing and finding explanations for streams of data, helping perception systems to
analyze processes that occur over time.
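
The core operation behind these probabilistic tools is Bayes' rule. A minimal diagnostic sketch (the prevalence, sensitivity, and specificity figures below are hypothetical): updating the probability of a disease after observing a positive test.

```python
# Bayes' rule for a binary diagnostic test (hypothetical numbers).
def posterior(prior, sensitivity, specificity):
    """P(disease | positive test)."""
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_pos

# 1% prevalence, 90% sensitivity, 95% specificity:
print(round(posterior(0.01, 0.90, 0.95), 3))  # → 0.154
```

The counterintuitively low posterior (a positive test raises the probability from 1% only to about 15%) is exactly the kind of reasoning under uncertainty that Bayesian networks generalize to many interacting variables.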

A key concept from the science of economics is "utility": a measure of how valuable something is to an
intelligent agent. Precise mathematical tools have been developed that analyze how an agent can make
choices and plan, using decision theory, decision analysis, and information value theory. These tools
include models such as Markov decision processes, dynamic decision networks, game theory and
mechanism design.

 Classifiers and statistical learning methods

Main articles: Classifier (mathematics), Statistical classification, and Machine learning

The simplest AI applications can be divided into two types: classifiers and controllers. Controllers do,
however, also classify conditions before inferring actions, and therefore classification forms a central
part of many AI systems. Classifiers are functions that use pattern matching to determine a closest
match. They can be tuned according to examples, making them very attractive for use in AI. These
examples are known as observations or patterns. In supervised learning, each pattern belongs to a certain
predefined class. A class can be seen as a decision that has to be made. All the observations combined
with their class labels are known as a data set. When a new observation is received, that observation is
classified based on previous experience.

A classifier can be trained in various ways; there are many statistical and machine learning approaches.
The most widely used classifiers are the neural network, kernel methods such as the support vector
machine, k-nearest neighbor algorithm, Gaussian mixture model, naive Bayes classifier, and decision
tree. The performance of these classifiers has been compared over a wide range of tasks. Classifier
performance depends greatly on the characteristics of the data to be classified. There is no single
classifier that works best on all given problems; this is also referred to as the "no free lunch" theorem.
Determining a suitable classifier for a given problem is still more an art than science.
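
A minimal sketch of one of the classifiers named above, the k-nearest neighbor algorithm (the toy data set and its labels are hypothetical): a new observation is assigned the majority class among its k closest labeled observations, i.e. it is classified "based on previous experience".

```python
from collections import Counter

def knn_classify(dataset, point, k=3):
    """dataset: list of (features, class_label) pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # Take the k examples closest to the new observation...
    nearest = sorted(dataset, key=lambda ex: dist(ex[0], point))[:k]
    # ...and return the most common class label among them.
    return Counter(label for _, label in nearest).most_common(1)[0][0]

data = [((1, 1), "benign"), ((1, 2), "benign"), ((2, 1), "benign"),
        ((6, 6), "malignant"), ((7, 6), "malignant"), ((6, 7), "malignant")]
print(knn_classify(data, (2, 2)))   # benign
print(knn_classify(data, (6, 5)))   # malignant
```

Tuning here amounts to choosing k and the distance function, which illustrates why classifier performance depends so heavily on the characteristics of the data.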

 Neural networks

Main articles: Artificial neural network and Connectionism

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human
brain.

Neural networks are modeled after the neurons in the human brain, where a trained algorithm determines
an output response for input signals. The study of non-learning artificial neural networks began in the
decade before the field of AI research was founded, in the work of Walter Pitts and Warren
McCulloch. Frank Rosenblatt invented the perceptron, a learning network with a single layer, similar to
the old concept of linear regression. Early pioneers also include Alexey Grigorevich Ivakhnenko, Teuvo
Kohonen, Stephen Grossberg, Kunihiko Fukushima, Christoph von der Malsburg, David Willshaw,
Shun-Ichi Amari, Bernard Widrow, John Hopfield, Eduardo R. Caianiello, and others.

The main categories of networks are acyclic or feed forward neural networks (where the signal passes
in only one direction) and recurrent neural networks (which allow feedback and short-term memories of
previous input events). Among the most popular feed forward networks are perceptrons, multi-layer
perceptrons and radial basis networks. Neural networks can be applied to the problem of intelligent
control (for robotics) or learning, using such techniques as Hebbian learning, GMDH or competitive
learning.

Today, neural networks are often trained by the back propagation algorithm, which had been around
since 1970 as the reverse mode of automatic differentiation published by Seppo Linnainmaa, and was
introduced to neural networks by Paul Werbos.
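
A minimal sketch of the gradient-descent idea behind backpropagation, reduced to its one-unit case (a single sigmoid neuron trained on toy AND-gate data, purely illustrative): the error is propagated back through the sigmoid to adjust each weight.

```python
import math, random

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]  # two weights + bias

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # logical AND

for _ in range(5000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
        # Gradient of the squared error, passed back through the sigmoid.
        delta = (out - target) * out * (1 - out)
        w[0] -= 0.5 * delta * x1
        w[1] -= 0.5 * delta * x2
        w[2] -= 0.5 * delta

predict = lambda x1, x2: round(sigmoid(w[0] * x1 + w[1] * x2 + w[2]))
print([predict(x1, x2) for (x1, x2), _ in data])
```

In a multi-layer network the same chain-rule computation is repeated layer by layer from the output back toward the input, which is what the full backpropagation algorithm does.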

 Deep feed forward neural networks

Main article: Deep learning


Deep learning is any artificial neural network that can learn a long chain of causal links. For example, a
feed forward network with six hidden layers can learn a seven-link causal chain (six hidden layers +
output layer) and has a "credit assignment path" (CAP) depth of seven. Many deep learning systems
need to be able to learn chains ten or more causal links in length. Deep learning has transformed many
important subfields of artificial intelligence, including computer vision, speech recognition, natural
language processing and others.

According to one overview, the expression "Deep Learning" was introduced to the Machine Learning
community by Rina Dechter in 1986 and gained traction after Igor Aizenberg and colleagues introduced
it to Artificial Neural Networks in 2000. The first functional Deep Learning networks were published by
Alexey Grigorevich Ivakhnenko and V. G. Lapa in 1965. These networks are trained one layer at a time.
Ivakhnenko's 1971 paper describes the learning of a deep feed forward multilayer perceptron with eight
layers, already much deeper than many later networks. In 2006, a publication by Geoffrey Hinton and
Ruslan Salakhutdinov introduced another way of pre-training many-layered feed forward neural networks
(FNNs) one layer at a time, treating each layer in turn as an unsupervised restricted Boltzmann machine,
then using supervised backpropagation for fine-tuning. Similar to shallow artificial neural networks,
deep neural networks can model complex non-linear relationships. Over the last few years, advances in
both machine learning algorithms and computer hardware have led to more efficient methods for
training deep neural networks that contain many layers of non-linear hidden units and a very large
output layer.

Deep learning often uses convolutional neural networks (CNNs), whose origins can be traced back to
the Neocognitron introduced by Kunihiko Fukushima in 1980. In 1989, Yann LeCun and colleagues
applied backpropagation to such an architecture. In the early 2000s, in an industrial application CNNs
already processed an estimated 10% to 20% of all the checks written in the US. Since 2011, fast
implementations of CNNs on GPUs have won many visual pattern recognition competitions.

Control theory

Main article: Intelligent control


Control theory, the grandchild of cybernetics, has many important applications, especially in robotics.

 Evaluating progress AI in Healthcare

Main article: Progress in artificial intelligence

In 1950, Alan Turing proposed a general procedure to test the intelligence of an agent, now known as
the Turing test. This procedure allows almost all the major problems of artificial intelligence to be
tested. However, it is a very difficult challenge and at present all agents fail.

Artificial intelligence can also be evaluated on specific problems such as small problems in chemistry,
hand-writing recognition and game-playing. Such tests have been termed subject matter expert Turing
tests. Smaller problems provide more achievable goals and there are an ever-increasing number of
positive results.

For example, performance at draughts is optimal, performance at chess is high-human and nearing
super-human, and performance at many everyday tasks is sub-human.

A quite different approach measures machine intelligence through tests which are developed from
mathematical definitions of intelligence. Examples of these kinds of tests began in the late nineties with
intelligence tests devised using notions from Kolmogorov complexity and data compression. Two
major advantages of mathematical definitions are their applicability to nonhuman intelligences and their
absence of a requirement for human testers.

A derivative of the Turing test is the Completely Automated Public Turing test to tell Computers and
Humans Apart (CAPTCHA). As the name implies, this helps to determine that a user is an actual person and not a
computer posing as a human. In contrast to the standard Turing test, CAPTCHA is administered by a
machine and targeted to a human as opposed to being administered by a human and targeted to a
machine. A computer asks a user to complete a simple test then generates a grade for that test.
Computers are unable to solve the problem, so correct solutions are deemed to be the result of a person
taking the test.

 The Emergence of AI & its Significance


The term “artificial intelligence” was coined at a conference at Dartmouth College in 1956. Until 1974,
AI consisted of work that included reasoning for solving problems in geometry and algebra and
communicating in natural language.

Between 1980 and 1987, there was a rise in expert systems that answered questions or solved problems
about specific knowledge. Interest in AI declined until IBM’s Deep Blue, a chess-playing computer,
defeated Russian grandmaster Garry Kasparov in 1997. Since then, other AI achievements have come to
include handwriting recognition, testing for autonomous vehicles, the first domestic or pet robot, and
humanoid robots.

In February 2011, IBM's Watson defeated two of the greatest Jeopardy! champions in an exhibition
match. In 2016, Google DeepMind's AlphaGo, an AI computer program, beat a human professional
player in a game of Go. Today, big data, faster computers and advanced machine learning all play a role
in the development of artificial intelligence.

AI has many applications in a myriad of industries, including finance, transportation and healthcare,
where it will change how the industry diagnoses and treats illnesses. AI has been applied to object, face,
speech and handwriting recognition; virtual reality and image processing; natural language processing,
chatbots and translation; email spam filtering, robotics and data mining. According to market intelligence
firm Tractica, annual worldwide AI revenue will grow to $36.8 billion by 2025.

AI is leading to advancements in healthcare treatments, such as improving the organization of treatment
plans, analyzing data to provide better treatment plans, and monitoring treatments.

AI has the ability to quickly and more accurately identify signs of disease in medical images, like MRI,
CT scans, ultrasound and x-rays, and therefore allows faster diagnostics, reducing the time patients wait
for a diagnosis from weeks to mere hours and accelerating the introduction of treatment options.
 Virtual Assistants

In this day and age when people expect to get answers instantly, virtual assistants enable patients to get
answers in real time. Patients can ask medical questions and receive answers, get more information and
reminders about taking medications, report information to physicians, and gain other medical support.
Physicians can also take advantage of healthcare virtual assistants by tracking and following through with
orders and making sure they are ordering the correct medication for patients.

 Reduce Costs

Frost & Sullivan reports that AI has the potential to improve outcomes by 30-40% and reduce the cost
of treatment by as much as 50%. Improvements in precision and efficiency mean fewer human errors,
leading to a decrease in doctor visits. Doctors are also able to get information from data for patients who
are at risk of certain diseases to prevent hospital re-admissions.

On a larger scale, Healthcare IT News reports that potential cost savings from AI applications run into
the billions of dollars. According to Accenture, key clinical health AI applications can generate $150
billion in annual savings for the United States healthcare economy by 2026.

 Treatment Plans

Another benefit of AI in healthcare is the ability to design treatment plans. Doctors can now search a
database, such as Modernizing Medicine, a medical assistant used to collect patient information, record
diagnoses, order tests and prescriptions and prepare billing information. Moreover, the ability to search
public databases with information from thousands of doctors and patient cases can help physicians
administer better personalized treatments or find comparable cases.

AI Risks in Healthcare

 Accuracy and Safety

Since AI is fairly new, it has the potential to be less accurate and reliable, thereby putting patients at risk.
The BBC article, The Real Risk of Artificial Intelligence addresses this:
“Take a system trained to learn which patients with pneumonia had a higher risk of death, so that they
might be admitted to hospital. It inadvertently classified patients with asthma as being at lower risk. This
was because in normal situations, people with pneumonia and a history of asthma go straight to intensive
care and therefore get the kind of treatment that significantly reduces their risk of dying. The machine
learning took this to mean that asthma + pneumonia = lower risk of death.”

Furthermore, AI has to be reliable enough to keep sensitive data, like addresses and financial and health
information secure. Institutions that handle sensitive medical information need to make sure their sharing
policies keep information safe.
 Risk in new/exceptional health cases

Not only does AI have to be accurate and safe, it has to be created so it is up to date with new health
cases. In other words, a program will only be as good as the data it learns from. Programs need to be trained, or
at least constantly updated, to be able to identify new/exceptional health cases.

 Risk for Doctors & Patients

AI can also pose a risk for doctors and patients. Since AI has not been perfected, doctors cannot fully
rely on AI and still need to make decisions based on their knowledge and expertise. Patients are also at
risk for the same reason. If a program provides incorrect information, patients will not be treated
properly.

Challenges for AI in Healthcare

 Adoption

One of the challenges AI faces in healthcare is widespread clinical adoption. To realize the value of AI,
the healthcare industry needs to create a workforce that is knowledgeable about AI so they are
comfortable using AI technologies, thereby enabling the AI technologies to "learn" and grow smarter.

 Training Doctors/Patients

Another challenge is training doctors and patients to use AI. Learning how to use technology may be a
challenge for some. Likewise, not everyone is open to information given by a “robot.” In other words,
accepting AI technology is a challenge that needs to be addressed through education.

 Regulations
Complying with regulations is also a challenge for AI in the healthcare industry. For one, there is the
need for approvals from FDA before an AI device or application is applied to health care. This is
especially true because AI is at a nascent stage and not a technology that is fully known or understood.
Moreover, the existing approval process deals more with AI hardware and not about data. Therefore, data
from AI poses a new regulatory challenge for FDA and needs to be validated more thoroughly.

APPLICATIONS OF ARTIFICIAL INTELLIGENCE IN HEALTHCARE

1. Managing Medical Records and Other Data. Since the first step in health care is compiling and
analyzing information, data management is the most widely used application of artificial intelligence
and digital automation. Robots collect, store, re-format, and trace data to provide faster, more
consistent access.

2. Doing Repetitive Jobs. Analyzing tests, X-Rays, CT scans, data entry, and other mundane tasks can
all be done faster and more accurately by robots. Cardiology and radiology are two disciplines where the
amount of data to analyze can be overwhelming and time consuming.

3. Treatment Design. Artificial intelligence systems have been created to analyze data – notes and
reports from a patient’s file, external research, and clinical expertise – to help select the correct,
individually customized treatment path.
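The selection idea described above can be illustrated (not as a clinical tool) by scoring candidate
treatment paths against a patient's recorded factors. The treatment names and factors below are invented
for the example; real systems weigh far richer evidence than simple overlap.

```python
# Illustrative sketch only: rank candidate treatment plans by how many
# of the patient's recorded factors each plan is suited for.

def rank_treatments(patient_factors: set, treatments: dict) -> list:
    """Return treatment names sorted by overlap with the patient's factors."""
    scored = [(len(patient_factors & suited), name)
              for name, suited in treatments.items()]
    scored.sort(reverse=True)  # highest overlap first
    return [name for _, name in scored]

patient = {"diabetic", "elderly", "hypertensive"}
options = {
    "plan_a": {"diabetic", "hypertensive"},
    "plan_b": {"pediatric"},
    "plan_c": {"elderly"},
}
print(rank_treatments(patient, options))  # plan_a ranks first
```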

4. Digital Consultation. Apps like Babylon in the UK use AI to give medical consultations based on
personal medical history and common medical knowledge.
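A very simplified sketch of the symptom-matching idea such consultation apps build on is shown below.
The conditions and symptoms are invented for illustration; a real app like Babylon uses far richer
models plus the patient's personal history.

```python
# Toy triage sketch (invented condition table, not medical advice):
# suggest conditions sharing at least two of the reported symptoms.

CONDITIONS = {
    "common cold": {"cough", "sore throat", "runny nose"},
    "migraine": {"headache", "nausea", "light sensitivity"},
}

def triage(symptoms: list) -> list:
    """Return condition names overlapping the reported symptoms."""
    reported = set(symptoms)
    return [name for name, signs in CONDITIONS.items()
            if len(reported & signs) >= 2]

print(triage(["cough", "runny nose", "fatigue"]))  # ['common cold']
```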

5. Virtual Nurses. The startup Sense.ly has developed Molly, a digital nurse that helps monitor patients'
conditions and follow up on treatments between doctor visits. The program uses machine learning to
support patients, specializing in chronic illnesses.

6. Medication Management

The National Institutes of Health has created the AiCure app to monitor a patient's use of medication.
A smartphone's webcam is paired with AI to autonomously confirm that patients are taking their
prescriptions and to help them manage their condition.
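The adherence-checking logic behind this idea can be sketched as comparing scheduled doses against
confirmed ones. The schedule format below is an assumption for illustration; the real app confirms each
dose visually through the phone camera rather than from a simple list.

```python
# Minimal adherence sketch (assumed data format): flag scheduled doses
# that have no matching confirmation.

from datetime import date

def missed_doses(scheduled: list, confirmed: list) -> list:
    """Return scheduled (day, time) dose slots never confirmed."""
    return sorted(set(scheduled) - set(confirmed))

scheduled = [(date(2018, 3, 1), "08:00"), (date(2018, 3, 1), "20:00")]
confirmed = [(date(2018, 3, 1), "08:00")]
print(missed_doses(scheduled, confirmed))  # the 20:00 dose was missed
```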
7. Drug Creation

Developing pharmaceuticals through clinical trials can take more than a decade and cost billions of
dollars, so making this process faster and cheaper could change the world.

In one demonstration, an AI drug-screening program found two existing medications that may reduce Ebola
infectivity in a single day, when analysis of this type generally takes months or years – a difference
that could mean saving thousands of lives.
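The screening idea can be caricatured as ranking a compound library against a target profile. Set overlap
stands in here for what real systems do with learned molecular models; the compound names and features
below are entirely made up.

```python
# Toy virtual-screening sketch (invented features): keep the compounds
# whose feature sets best match a target profile.

def screen(candidates: dict, target: set, top_n: int = 2) -> list:
    """Return the top_n compound names ranked by shared features."""
    ranked = sorted(candidates.items(),
                    key=lambda kv: len(kv[1] & target), reverse=True)
    return [name for name, _ in ranked[:top_n]]

target = {"f1", "f2", "f3"}
library = {
    "cmpd_a": {"f1", "f2"},
    "cmpd_b": {"f4"},
    "cmpd_c": {"f1", "f2", "f3"},
}
print(screen(library, target))  # best matches first
```

The point of the sketch is the speed argument in the text: once candidates are represented numerically,
a machine can rank an entire library in seconds.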

8. Precision Medicine

Genetics and genomics look for mutations and links to disease in the information carried by DNA. With the
help of AI, body scans can spot cancer and vascular disease early, and genetic data can be used to predict
the health issues people might face.
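At its simplest, the genomic side of this amounts to checking a patient's variant list against a table of
mutations with known disease links. The variant IDs and associations below are invented purely for
illustration.

```python
# Toy sketch (invented variant table): flag a patient's variants that
# appear in a table of known disease associations.

KNOWN_LINKS = {
    "VAR001": "elevated cardiovascular risk",
    "VAR002": "hereditary cancer syndrome",
}

def flag_variants(patient_variants: list) -> dict:
    """Return the patient's variants that have a known disease link."""
    return {v: KNOWN_LINKS[v] for v in patient_variants if v in KNOWN_LINKS}

print(flag_variants(["VAR001", "VAR999"]))  # only VAR001 is known
```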

9. Health Monitoring

Wearable health trackers – like those from Fitbit, Apple, Garmin and others – monitor heart rate and
activity levels. They can send alerts prompting the user to get more exercise and can share this
information with doctors (and AI systems), providing additional data points on patients' needs and habits.
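The alerting behaviour of such trackers can be sketched as a simple range check over a stream of
readings. The thresholds below are illustrative assumptions, not medical guidance.

```python
# Minimal sketch: flag heart-rate readings outside an assumed resting
# range, returning their positions in the stream.

def heart_rate_alerts(readings: list, low: int = 50, high: int = 120) -> list:
    """Return (index, bpm) pairs for readings outside [low, high]."""
    return [(i, bpm) for i, bpm in enumerate(readings)
            if bpm < low or bpm > high]

print(heart_rate_alerts([62, 70, 131, 58, 45]))  # flags 131 and 45
```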

10. Healthcare System Analysis

In the Netherlands, 97% of healthcare invoices are digital. A Dutch company uses AI to sift through this
data to highlight mistakes in treatments and workflow inefficiencies, helping regional healthcare systems
avoid unnecessary patient hospitalizations.
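One simple form of invoice checking is flagging charges far from the typical amount for the same
procedure code. The codes, amounts, and the 2x-median threshold below are assumptions for illustration,
not the Dutch company's actual method.

```python
# Illustrative anomaly sketch: flag invoices charging more than
# `factor` times the median amount for their procedure code.

from statistics import median

def flag_invoices(invoices: list, factor: float = 2.0) -> list:
    """Return (code, amount) pairs that exceed factor x median for the code."""
    by_code = {}
    for code, amount in invoices:
        by_code.setdefault(code, []).append(amount)
    typical = {code: median(vals) for code, vals in by_code.items()}
    return [(code, amount) for code, amount in invoices
            if amount > factor * typical[code]]

bills = [("X100", 80), ("X100", 85), ("X100", 300), ("Y200", 40)]
print(flag_invoices(bills))  # the 300 charge stands out
```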

These are just a sample of the solutions AI is offering the healthcare industry; as innovation pushes the
capabilities of automation and digital workforces, more will follow.

Conclusion

Artificial intelligence is a maturing science with applications in many fields, including the healthcare
system. Advances in AI have reduced human effort and ultimately lead to easier, faster, and more
practical diagnosis of various serious diseases. AI is also helpful in keeping watch over daily routine
life. While the primary role of artificial intelligence in patient care today is diagnosis and image
analysis, the future holds great potential for applying AI to improve many other parts of the patient-care
process. Great challenges remain because of the size and complexity of health data, but the AI community
is well on its way to meeting these challenges by developing new pattern-recognition techniques, scalable
algorithms, and novel approaches that use enormous amounts of health data to answer general questions.

