
ARTIFICIAL INTELLIGENCE

7th sem
MODULE 1:
 Introduction: What is artificial intelligence?
 Foundations of artificial intelligence
 History of artificial intelligence
 Problem solving: formulating problems
 Problem types, states and operators
 State space
 Search strategies
According to the father of Artificial Intelligence, John
McCarthy, it is “The science and engineering of
making intelligent machines, especially
intelligent computer programs”.

Artificial Intelligence is a way of making a computer,
a computer-controlled robot, or a software think
intelligently, in a manner similar to how intelligent
humans think.

In computer science, the field of AI research defines
itself as the study of "intelligent agents".
TYPES OF INTELLIGENCE

 Linguistic intelligence: The ability to speak
 Musical intelligence: The ability to create and
understand meanings made of sound
 Logical-mathematical intelligence: The ability
to use and understand relationships in the
absence of action or objects
 Spatial intelligence: The ability to perceive
visual or spatial information, e.g. maps
 Bodily-Kinesthetic intelligence: The ability to
use the complete body, or parts of it, to solve
problems
WHAT IS INTELLIGENCE COMPOSED OF?

 Reasoning
 Learning

 Problem Solving

 Perception

 Linguistic Intelligence
GOALS OF AI

 To Create Expert Systems − Systems which
exhibit intelligent behavior, learn, demonstrate,
explain, and advise their users.

 To Implement Human Intelligence in Machines −
Creating systems that understand, think, learn, and
behave like humans.
CONT.
Engineering Goal
To solve real-world problems. Build systems that exhibit
intelligent behavior.

Scientific Goal
To understand what kind of computational mechanisms
are needed for modeling intelligent behavior.
Artificial intelligence is a science and technology based on disciplines
such as Computer Science, Biology, Psychology, Linguistics,
Mathematics, and Engineering. A major thrust of AI is in the
development of computer functions associated with human
intelligence, such as reasoning, learning, and problem solving.
In the real world, knowledge has some unwelcome
properties −
 Its volume is huge, next to unimaginable.

 It is not well-organized or well-formatted.

 It keeps changing constantly.

An AI technique is a way to organize and use
knowledge efficiently in such a way that −
 It should be easily modifiable to correct errors.

 It should be useful in many situations even though it
is incomplete or inaccurate.
APPLICATIONS OF AI

 Gaming − AI plays a crucial role in strategic games
such as chess, poker, tic-tac-toe, etc., where the
machine can think of a large number of possible
positions based on heuristic knowledge.

 Natural Language Processing − It is possible to
interact with a computer that understands the
natural language spoken by humans.

 Expert Systems − There are some applications
which integrate machine, software, and special
information to impart reasoning and advising.
They provide explanations and advice to the users.
CONT.
 Vision Systems − These systems understand, interpret,
and comprehend visual input on the computer. For example,
 A spying aeroplane takes photographs, which are used to
figure out spatial information or a map of the area.
 Doctors use a clinical expert system to diagnose the patient.
 Police use computer software that can recognize the face of a
criminal against the stored portrait made by a forensic artist.

 Speech Recognition − Some intelligent systems are
capable of hearing and comprehending language in
terms of sentences and their meanings while a human talks
to them. They can handle different accents, slang words, noise
in the background, changes in a human's voice due to a cold, etc.
CONT.
 Handwriting Recognition − The handwriting
recognition software reads the text written on paper by
a pen or on a screen by a stylus. It can recognize the
shapes of the letters and convert them into editable text.

 Intelligent Robots − Robots are able to perform the
tasks given by a human. They have sensors to detect
physical data from the real world such as light, heat,
temperature, movement, sound, bumps, and pressure.
They have efficient processors, multiple sensors and
huge memory to exhibit intelligence. In addition, they
are capable of learning from their mistakes and they
can adapt to a new environment.
 1923 − Karel Čapek's play "Rossum's Universal Robots" (RUR)
opens in London; first use of the word "robot" in English.

 1943 − Foundations for neural networks laid.

 1945 − Isaac Asimov, a Columbia University alumnus, coined the
term Robotics.

 1956 − John McCarthy coined the term Artificial Intelligence.
Demonstration of the first running AI program at Carnegie Mellon
University.

 1958 − John McCarthy invents the LISP programming language for AI.

 1964 − Danny Bobrow's dissertation at MIT showed that computers can
understand natural language well enough to solve algebra word
problems correctly.
CONT.
1990s − Major advances in all areas of AI:
 Significant demonstrations in machine learning

 Case-based reasoning

 Multi-agent planning

 Scheduling

 Data mining, Web crawlers

 Natural language understanding and translation

 Vision, virtual reality

 Games
CONT.
An agent is something that acts in an
environment − it does something.

An agent acts intelligently when:
 what it does is appropriate for its
circumstances and its goals,
 it is flexible to changing environments and
changing goals,
 it learns from experience.

 Ideal Rational Agent − An ideal rational agent
is one which is capable of taking the expected
actions to maximize its performance measure.
ARCHITECTURE OF AGENTS
THE STRUCTURE OF INTELLIGENT
AGENTS
 Agent = Architecture + Agent Program

 Simple Reflex Agents: They choose actions
based only on the current percept.
 Model-Based Reflex Agents: They use a model
of the world to choose their actions.
 Goal-Based Agents: They choose their actions
in order to achieve goals.
 Utility-Based Agents: They choose actions
based on a preference (utility) for each state.
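The first of these agent types can be sketched in a few lines. Below is a minimal simple reflex agent for the two-square vacuum world commonly used in AI courses (the world and function names are illustrative assumptions, not from the slides): the action depends only on the current percept, with no internal state.

```python
# A simple reflex agent for an assumed two-square vacuum world.
# The percept is a (location, status) pair, e.g. ('A', 'Dirty').
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == 'Dirty':
        return 'Suck'       # clean the current square
    elif location == 'A':
        return 'Right'      # move to the other square
    else:
        return 'Left'

print(simple_reflex_vacuum_agent(('A', 'Dirty')))  # Suck
```

Because the agent consults only the current percept, it cannot remember which squares it has already cleaned; that limitation is what model-based reflex agents address with internal state.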
TURING TEST

 The success of the intelligent behavior of a system
can be measured with the Turing Test.

 Two persons and a machine to be evaluated
participate in the test. Of the two persons,
one plays the role of the tester.

 This test aims at fooling the tester. If the tester
fails to distinguish the machine's response from the
human response, then the machine is said to be
intelligent.
PROPERTIES OF ENVIRONMENT

 Discrete / Continuous
 Observable / Partially Observable

 Static / Dynamic

 Single agent / Multiple agents

 Accessible / Inaccessible

 Deterministic / Non-deterministic

 Episodic / Non-episodic
AI methods can be divided into two broad categories:

Symbolic methods, which focus on knowledge-based
systems (KBS); and

Computational intelligence, which includes such
methods as neural networks (NN) and fuzzy systems (FS).
Problem Formulation − The process of deciding what actions
and states to consider, given a goal.

Problem Space − The environment in which the search takes
place (a set of states and a set of operators to change those states).

Task environments
To design a rational agent we need to specify a task environment
(a problem specification for which the agent is a solution)
PEAS: to specify a task environment
• Performance measure
• Environment
• Actuators
• Sensors
EXAMPLE
PEAS: Specifying an automated taxi driver
Performance measure:
safe, fast, legal, comfortable, maximize profits
Environment:
roads, other traffic, pedestrians, customers
Actuators:
steering, accelerator, brake, signal, horn
Sensors:
cameras, sonar, speedometer, GPS
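A PEAS description is just a structured record of these four lists. A minimal sketch of the taxi example as a Python dataclass (the class and field names are illustrative assumptions):

```python
# A PEAS task-environment description as a plain record.
from dataclasses import dataclass

@dataclass
class PEAS:
    performance: list
    environment: list
    actuators: list
    sensors: list

taxi = PEAS(
    performance=['safe', 'fast', 'legal', 'comfortable', 'maximize profits'],
    environment=['roads', 'other traffic', 'pedestrians', 'customers'],
    actuators=['steering', 'accelerator', 'brake', 'signal', 'horn'],
    sensors=['cameras', 'sonar', 'speedometer', 'GPS'],
)
print(taxi.sensors)  # ['cameras', 'sonar', 'speedometer', 'GPS']
```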
PEAS: MEDICAL DIAGNOSIS SYSTEM
 Performance measure:
Healthy patient, minimize costs
 Environment:
Hospital, staff
 Actuators:
Screen display (form including: questions, tests,
diagnoses, treatments, referrals)
 Sensors:
Keyboard (entry of symptoms, findings, patient's
answers)
 Problem Instance − Initial state + Goal state.
 Problem Space Graph − It represents the problem state space.
States are shown by nodes and operators are shown by edges.
 Depth of a problem − Length of the shortest path, or shortest
sequence of operators, from the initial state to a goal state.
 Space Complexity − The maximum number of nodes that are
stored in memory.
 Time Complexity − The maximum number of nodes that are
created.
 Admissibility − A property of an algorithm to always find an
optimal solution.
 Branching Factor − The average number of child nodes in the
problem space graph.
CONT.
EXAMPLE SEARCH PROBLEM: HOLIDAY
IN ROMANIA
Holiday in Romania II

 On holiday in Romania; currently in Arad
•Flight leaves tomorrow from Bucharest
 Formulate goal
•Be in Bucharest
 Formulate search problem
•States: various cities
•Actions: drive between cities
•Performance measure: minimize distance
 Find solution
•Sequence of cities; e.g. Arad, Sibiu, Fagaras,
Bucharest, …
CONT.
 Search is the process of looking for a sequence of
actions that reaches the goal.
 A search algorithm takes a problem as input and
returns a solution in the form of an action sequence.
 Once a solution is found, the actions it recommends can
be carried out. This is called the execution.
A solution is a sequence of
actions from the initial state to
a goal state.

Optimal Solution − A solution is
optimal if no solution has a
lower path cost
(number of actions to reach the goal).
COMPONENTS OF A PROBLEM

 Initial State
 Actions

 Transition Model

 Goal Test

 Path Cost
 (Initial State)
As the name suggests, the state the agent starts in. For
example, in the Romania example, the initial state of our
agent is in Arad, i.e., IN(Arad).

 (Actions)
A description of the possible actions given a particular state
s, i.e., ACTIONS(s) returns all possible actions
executable in s; this set of actions is said to be
applicable in s. For example, {Go(Sibiu), Go(Timisoara),
Go(Zerind)} are applicable in state Arad.

 (Transition Model)
Describes what each action a does in state s, i.e.,
RESULT(s, a) returns the state that results from doing
action a in state s. For example,
RESULT(IN(Arad), Go(Zerind)) = IN(Zerind)

 (Goal Test)
Determines whether a given state is a goal state or not. It is
not always as simple as in our example (to be in
Bucharest); for example, in a chess game the goal state
is "checkmate".
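These components can be sketched directly in code. The fragment below encodes ACTIONS, RESULT and the goal test for a small piece of the Romania map (only a few cities are included; the function names mirror the definitions above, while the string encoding Go(<city>) is an illustrative assumption):

```python
# A tiny fragment of the Romania road map as an adjacency list.
ROADS = {
    'Arad': ['Sibiu', 'Timisoara', 'Zerind'],
    'Sibiu': ['Arad', 'Fagaras', 'Oradea', 'Rimnicu Vilcea'],
    'Fagaras': ['Sibiu', 'Bucharest'],
}

def actions(state):
    """ACTIONS(s): the actions applicable in state s."""
    return [f'Go({city})' for city in ROADS.get(state, [])]

def result(state, action):
    """RESULT(s, a): the transition model; Go(Zerind) -> Zerind."""
    return action[len('Go('):-1]

def goal_test(state):
    """True iff the state is the goal (being in Bucharest)."""
    return state == 'Bucharest'

print(result('Arad', 'Go(Zerind)'))  # Zerind
```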
 State space: the set of all states reachable from the initial state by
any sequence of actions
•When several operators can apply to each state, this gets large very quickly
•Might be a proper subset of the set of configurations
 Path: a sequence of actions leading from one state sj to another state
sk

 Frontier − those states that are available for expanding
(for applying legal actions to)

 A state-space problem consists of


 a set of states;
 a distinguished set of states called the start states;
 a set of actions available to the agent in each state;
 an action function that, given a state and an action, returns a new state;
 a set of goal states, often specified as a Boolean function, goal(s), that is
true when s is a goal state; and
 a criterion that specifies the quality of an acceptable solution. For example,
any sequence of actions that gets the agent to the goal state may be
acceptable, or there may be costs associated with actions and the agent may
be required to find a sequence that has minimal total cost. This is called an
optimal solution. Alternatively, it may be satisfied with any solution that
is within 10% of optimal.
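The definition above can be written down as a plain record bundling the five parts together. The class and field names below are illustrative assumptions, instantiated for a trivial two-state problem:

```python
# A state-space problem as a plain Python record.
class StateSpaceProblem:
    def __init__(self, states, start_states, actions, act, goal):
        self.states = states              # set of states
        self.start_states = start_states  # distinguished start states
        self.actions = actions            # actions(s) -> actions available in s
        self.act = act                    # act(s, a) -> new state
        self.goal = goal                  # goal(s) -> True iff s is a goal state

toy = StateSpaceProblem(
    states={'s0', 's1'},
    start_states={'s0'},
    actions=lambda s: ['go'] if s == 's0' else [],
    act=lambda s, a: 's1' if (s, a) == ('s0', 'go') else s,
    goal=lambda s: s == 's1',
)
print(toy.goal(toy.act('s0', 'go')))  # True
```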
PROBLEM FORMULATION
 All five components defined above (initial state, actions, transition model,
goal test and path cost) together form a problem formulation.

 This formulation is abstract, i.e., details are hidden.

 Abstraction is useful since it simplifies the problem by hiding many
details while still covering the most important information about states and
actions (retaining the state space in a simple form); therefore the abstraction
needs to be valid.

 An abstraction is called valid when the abstract solution can be expanded
into the more detailed world.

 An abstraction is useful if the actions in the solution are easier than in the
original problem, i.e., they need no further planning and searching.

 Construction of useful and valid abstractions is challenging.

 Strategies are evaluated along the following
dimensions:
–completeness: does it always find a solution if one exists?
–time complexity: number of nodes generated
–space complexity: maximum number of nodes in memory
–optimality: does it always find a least-cost solution?

 Time and space complexity are measured in
terms of
–b: maximum branching factor (average number of child nodes
for a given node) of the search tree
–d: depth of the least-cost solution
–m: maximum depth of the state space (may be ∞)
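These quantities give concrete bounds. For example, breadth-first search generates at most 1 + b + b² + … + b^d nodes; a quick computation (with illustrative values b = 10, d = 5) shows how fast this grows:

```python
# Worked example of the BFS time-complexity bound:
# the number of nodes generated is at most 1 + b + b^2 + ... + b^d.
def bfs_node_bound(b, d):
    return sum(b ** i for i in range(d + 1))

print(bfs_node_bound(10, 5))  # 111111
```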
BREADTH-FIRST SEARCH

It starts from the root node, explores the neighboring
nodes first, and then moves towards the next-level neighbors,
expanding the tree level by level until the solution is found.
It can be implemented using a FIFO queue data structure.
This method provides the shortest path (fewest actions) to the solution.
 Input: a graph G and a starting vertex root of G
 Output: the goal state; the parent links trace the shortest
path back to root

 procedure BFS(G, root):
     Q := queue initialized with {root}
     label root as discovered
     while Q is not empty:
         current := Q.dequeue()
         if current is the goal:
             return current
         for each node n that is adjacent to current:
             if n is not labeled as discovered:
                 label n as discovered
                 n.parent := current
                 Q.enqueue(n)
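The procedure above can be turned into runnable Python with a deque as the FIFO queue and a parent dictionary doubling as the "discovered" set (the adjacency-list graph below is an illustrative assumption):

```python
# Runnable BFS: returns the shortest path (fewest edges) to the goal.
from collections import deque

def bfs(graph, root, goal):
    parent = {root: None}          # also serves as the discovered set
    queue = deque([root])
    while queue:
        current = queue.popleft()  # FIFO: shallowest node first
        if current == goal:
            path = []              # walk parent links back to root
            while current is not None:
                path.append(current)
                current = parent[current]
            return path[::-1]
        for n in graph.get(current, []):
            if n not in parent:
                parent[n] = current
                queue.append(n)
    return None                    # goal not reachable

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(bfs(graph, 'A', 'D'))  # ['A', 'B', 'D']
```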
Uniform Cost Search

 Nodes are sorted by increasing cost of the path to the node,
and the least-cost node is always expanded first. It is
identical to Breadth-First Search if each transition has the
same cost.

 It explores paths in increasing order of cost.

 Disadvantage − There can be multiple long paths
with cost ≤ C*. Uniform Cost Search must explore
them all.
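The "always expand the least-cost node" rule is naturally implemented with a priority queue. A minimal sketch using Python's heapq (the weighted graph is an illustrative assumption):

```python
# Uniform cost search with a priority queue of (path cost, state).
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start)]
    best = {start: 0}              # cheapest known cost to each state
    while frontier:
        cost, state = heapq.heappop(frontier)  # least-cost node first
        if state == goal:
            return cost
        if cost > best.get(state, float('inf')):
            continue               # stale queue entry, skip it
        for nxt, step in graph.get(state, []):
            new_cost = cost + step
            if new_cost < best.get(nxt, float('inf')):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt))
    return None                    # goal not reachable

graph = {'A': [('B', 1), ('C', 5)], 'B': [('C', 1)], 'C': []}
print(uniform_cost_search(graph, 'A', 'C'))  # 2
```

Note how the direct A→C edge (cost 5) loses to the longer but cheaper path A→B→C (cost 2), which BFS, counting edges only, would not find.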
Disadvantage −
Since each level of nodes is saved for creating the next one,
it consumes a lot of memory. The space requirement
to store the nodes is exponential.

Its complexity depends on the number of nodes. It can
check for duplicate nodes.
DEPTH-FIRST SEARCH

 It is implemented recursively, or with a LIFO stack data
structure. It creates the same set of nodes as the Breadth-
First method, only in a different order.

 The recursive nature of DFS can also be implemented
using an explicit stack. The basic idea is as follows:
•Pick a starting node and push all its adjacent nodes onto
a stack.
•Pop a node from the stack to select the next node to visit
and push all its adjacent nodes onto the stack.
•Repeat this process until the stack is empty. However,
ensure that visited nodes are marked; this
prevents you from visiting the same node more than
once. If you do not mark visited nodes and
you visit the same node more than once, you may end
up in an infinite loop.
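The steps above can be sketched as iterative Python, with a visited list serving as the marking (the graph is an illustrative assumption):

```python
# Iterative DFS with an explicit stack; marks nodes to avoid revisits.
def dfs(graph, start):
    visited, stack = [], [start]
    while stack:
        node = stack.pop()                 # LIFO: deepest node first
        if node not in visited:
            visited.append(node)           # mark as visited
            # push neighbours; reversed() keeps a left-to-right visit order
            for n in reversed(graph.get(node, [])):
                stack.append(n)
    return visited

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': [], 'D': []}
print(dfs(graph, 'A'))  # ['A', 'B', 'D', 'C']
```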
Breadth-First:
–When completeness is important.
–When optimal solutions are important.

Depth-First:
–When solutions are dense and low-cost is important,
especially space costs.
Bidirectional Search

 It searches forward from the initial state and
backward from the goal state till both meet to
identify a common state.

 The path from the initial state is concatenated with
the inverse path from the goal state. Each search
is done only up to half of the total path.
Iterative Deepening Depth-First Search

 It performs a depth-first search to level 1, starts
over, executes a complete depth-first search to
level 2, and continues in this way till the
solution is found.

 It never generates a node at depth d until all
shallower nodes have been generated. It only saves a
stack of nodes. The algorithm ends when it finds a
solution at depth d. The number of nodes created at
depth d is b^d and at depth d−1 is b^(d−1).
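The strategy above can be sketched as a depth-limited DFS wrapped in a loop over increasing limits (the graph is an illustrative assumption; for brevity the sketch omits cycle checking, so it assumes an acyclic graph):

```python
# Iterative deepening: repeated depth-limited DFS with a growing limit.
def depth_limited(graph, node, goal, limit):
    if node == goal:
        return [node]
    if limit == 0:
        return None                      # cut off at the depth limit
    for n in graph.get(node, []):
        path = depth_limited(graph, n, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None

def iddfs(graph, start, goal, max_depth=10):
    for depth in range(max_depth + 1):   # restart with a larger limit
        path = depth_limited(graph, start, goal, depth)
        if path is not None:
            return path
    return None

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}
print(iddfs(graph, 'A', 'D'))  # ['A', 'B', 'D']
```

Because the search restarts from scratch at each limit, shallow levels are regenerated many times, but since b^d dominates b^(d−1) and all earlier terms, the repeated work adds only a constant factor.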
