
VYSYA INSTITUTE OF MANAGEMENT STUDIES, SALEM-103
STAFF NAME: YASODHA. K    CLASS: II MCA    SUBJECT: ARTIFICIAL INTELLIGENCE    DATE: 21/12/2012

UNIT-I

INTRODUCTION:
Definition: Artificial intelligence is the study of how to make computers do things which, at the moment, people do better.
Artificial Intelligence is:
o The study of the computations that make it possible to perceive, reason and then act.
o The effort to build and understand intelligent entities and agents.
Various definitions:
o Building intelligent entities.
o Getting computers to do tasks which require human intelligence.
For thousands of years, humans have tried to understand how we think. The field of Artificial Intelligence attempts not just to understand but also to build intelligent systems. It is one of the newest sciences: work started soon after World War II, and the name was coined in 1956. AI systematizes and automates intellectual tasks and is therefore relevant to any sphere of intellectual activity.

Things involved in AI:
o Ability to interact with the real world
  1. To perceive, understand and act
  2. E.g. speech recognition
  3. E.g. image understanding
  4. E.g. ability to take actions
o Reasoning and planning
  1. Modelling the external world, given input
  2. Solving new problems, planning and making decisions
  3. Ability to deal with unexpected problems
o Learning and adaptation
  1. Our internal models are always being updated. E.g. a baby learning to categorize and recognize animals.

Goals of AI:
o Engineering goal: solving real-world problems.
o Scientific goal: determining ideas for representing knowledge, using knowledge and assembling systems.

ARTIFICIAL INTELLIGENCE: What is AI? Write a note on the Turing test approach.
The definitions on the left below measure success in terms of fidelity to human performance, whereas the ones on the right measure success against an ideal concept of intelligence, called rationality. A system is rational if it does the "right thing".

Systems that think like humans: "The exciting new effort to make computers think... machines with minds, in the full and literal sense" (Haugeland, 1985)
Systems that think rationally: "The study of the computations that make it possible to perceive, reason and act" (Winston, 1992)
Systems that act like humans: "The art of creating machines that perform functions that require intelligence when performed by people" (Kurzweil, 1990)
Systems that act rationally: "Computational Intelligence is the study of the design of intelligent agents" (Poole et al., 1998)

Human-centered approaches use empirical science, involving hypothesis and experimental confirmation. The rationalist approach involves a combination of mathematics and engineering.

Acting humanly: The Turing Test
The Turing Test was proposed by Alan Turing (1950) in "Computing Machinery and Intelligence". The Turing test measures the performance of an intelligent machine against that of a human being.

Goal: The goal of the machine is to fool the questioner into believing that it is the person. If the machine succeeds at this, the questioner will conclude that the machine is acting humanly.

Process involved in the Turing Test:
Requirements: two persons and one machine. It is also called the Imitation Game.
o The computer is interrogated by a human via a teletype.
o It passes if the human cannot tell whether there is a computer or a human at the other end.
The interrogator does not see or speak to the participants directly and communicates only through the terminal. The human interrogator tests the human and the AI system simultaneously by asking questions of both participants, compares where each answer comes from, and decides whether the AI system passes the Turing test.
o Example: quickly summing a long list of numbers is difficult for a human but not for the machine, so such answers can give the machine away.

To pass the Turing test, the computer needs:
o Natural language processing: to communicate successfully in English.
o Knowledge representation: to store what it knows or hears.
o Automated reasoning: to use the stored information to answer questions and draw new conclusions.
o Machine learning: to adapt to new circumstances and to detect and extrapolate patterns.

Total Turing Test: this test includes a video signal so that the questioner can test the perceptual abilities of the subject (i.e. the responder). To pass this test the computer needs two more things:
o Computer vision: to perceive objects.
o Robotics: to manipulate objects and move about.

The Turing test is more useful for studying the principles of intelligence than for duplicating an exemplar. Artificial flight succeeded not by imitating birds but by learning aerodynamics; the goal was never to build machines that fly so exactly like pigeons that they can fool even other pigeons.

Prediction: Turing predicted that by the year 2000, a machine might have a 30% chance of fooling a lay person for 5 minutes.

Thinking humanly: cognitive modeling
A machine program is constructed to think like a human, but for that we need knowledge about the actual working of the human mind. Once this is known, we can write a computer program whose input/output and timing behavior matches human behavior. If they match, it suggests that the program's execution is working like a human mind.

Example: Allen Newell and Herbert Simon developed the General Problem Solver (GPS), which aimed to compare the trace of its reasoning steps with the steps of human subjects.

Related fields: Cognitive Science, the interdisciplinary study of mind and intelligence, and Cognitive Neuroscience. Cognition refers to the mental action or process of acquiring knowledge and understanding through thought, experience and the senses.
This approach requires scientific theories of the internal activities of the brain.
o At what level of abstraction: knowledge or circuits?
o How to validate? It requires either
  1. predicting and testing the behavior of human subjects (top-down), or
  2. direct identification from neurological data (bottom-up).

Thinking rationally: "laws of thought"
The Greek philosopher Aristotle was the first to try to define "right thinking", i.e. irrefutable reasoning processes. The study of right thinking initiated the field called LOGIC, and logic can in principle be used to create intelligent programs.
Example: An apple is a fruit. All fruits are tasty; therefore the apple is tasty.
The thought of Aristotle paved the way for logic and reasoning. Several Greek schools developed various forms of logic: notation and rules of derivation for thoughts. There is a direct line through mathematics and philosophy to modern AI.
Problems with this approach:
o Guidelines (steps) are needed to solve problems.
o Informal knowledge is hard to represent formally.
o Computational complexity and limited resources.

Acting rationally: the rational agent approach
Rational behavior: doing the right thing, i.e. acting so as to achieve one's goals, given one's beliefs. The "right thing" is that which is expected to maximize goal achievement, given the available information. It does not necessarily involve thinking (for example, the blinking reflex), but thinking should be in the service of rational action.
Rational agents:
o An agent is an entity that perceives and acts.
o An agent can be seen as a function from percept histories to actions, f : P* -> A.
o For a given set of tasks, an agent that achieves high performance is sought.
o A rational agent should act so as to achieve the best outcome.

HISTORY: Explain the history and generations of AI. (5 & 10 marks)
AI history:
The intellectual roots of AI date back to early studies of the nature of knowledge and reasoning. The dream of making a computer imitate humans also has a very early history. The concept of intelligent machines is found in Greek mythology. There is an ancient Greek story about Pygmalion, the legendary king of Cyprus. He fell in love with an ivory statue he made to represent his ideal woman. The king prayed to the goddess Aphrodite, and the goddess miraculously brought the statue to life. Other myths involve human-like artifacts: as a present from Zeus to Europa, Hephaestus created Talos, a huge robot. Talos was made of bronze and his duty was to patrol the beaches of Crete. Aristotle (384-322 BC) developed an informal system of syllogistic logic, which is the basis of the first formal deductive reasoning system.

Early in the 17th century, Descartes proposed that the bodies of animals are nothing more than complex machines. Pascal in 1642 made the first mechanical digital calculating machine. In the 19th century, George Boole developed a binary algebra representing "laws of thought", and Charles Babbage & Ada Byron worked on programmable mechanical calculating machines. In the late 19th and early 20th centuries, mathematical philosophers built on Boole's initial logic concepts to develop mathematical representations of logic problems. The advent of electronic computers then provided a revolutionary advance in the ability to study intelligence.

The Gestation of AI (1943-1956):
In 1943 McCulloch & Pitts developed a Boolean circuit model of the brain. They wrote the paper "A Logical Calculus of the Ideas Immanent in Nervous Activity", which explained how it is possible for neural networks to compute. Marvin Minsky and Dean Edmonds built the SNARC in 1951, the first randomly wired neural network learning machine (SNARC stands for Stochastic Neural-Analog Reinforcement Computer). It was a neural network computer that used 3000 vacuum tubes and a network of 40 neurons. In 1950 Turing wrote the article "Computing Machinery and Intelligence", which articulated a complete vision of AI. Turing's paper talked of many things: of solving problems by searching through the space of possible solutions, guided by heuristics; he illustrated his ideas on machine intelligence by reference to chess; and he even propounded the possibility of letting the machine alter its own instructions so that machines can learn from experience. In 1956 a famous conference took place at Dartmouth. The conference brought together the founding fathers of artificial intelligence for the first time, and in this meeting the term "Artificial Intelligence" was adopted.

Early Enthusiasm, Great Expectations (1952-1965):
Between 1952 and 1956, Samuel developed several programs for playing checkers. In 1956, Newell & Simon's Logic Theorist was published; it is considered by many to be the first AI program. In 1959, Gelernter developed a Geometry Engine. In 1961 James Slagle (PhD dissertation, MIT) wrote a symbolic integration program, SAINT; it was written in LISP and solved calculus problems at the college freshman level. In 1963, Thomas Evans's program ANALOGY was developed, which could solve IQ-test-type analogy problems. In 1963, Edward A. Feigenbaum & Julian Feldman published Computers and Thought, the first collection of articles about artificial intelligence. In 1965, J. Allen Robinson invented a mechanical proof procedure, the Resolution Method, which allowed programs to work efficiently with formal logic as a representation language.

A Dose of Reality (1966-1974):
In 1967, the DENDRAL program (Feigenbaum, Lederberg, Buchanan and Sutherland at Stanford) was demonstrated; it could interpret mass spectra of organic chemical compounds. This was the first successful knowledge-based program for scientific reasoning. In 1969 the SRI robot Shakey demonstrated combining locomotion, perception and problem solving.

Knowledge-Based Programs (1969-1979):
The years from 1969 to 1979 marked the early development of knowledge-based systems. In 1974, MYCIN demonstrated the power of rule-based systems for knowledge representation and inference in medical diagnosis and therapy. Knowledge representation schemes were developed, including frames, developed by Minsky. Logic-based languages like Prolog and Planner were developed.
AI becomes an Industry (1980-1988):
In the 1980s, Lisp machines were developed and marketed, and the expert systems industry boomed. 1981: Japan's 10-year Fifth Generation project.

The return of neural networks and novel AI (1986-present):
Around 1985, neural networks returned to popularity. In 1988, there was a resurgence of probabilistic and decision-theoretic methods. Early AI systems had used general-purpose methods and little knowledge; AI researchers realized that specialized knowledge is required to focus reasoning on rich tasks.

Influence in all fields:
The 1990s saw major advances in all areas of AI, including the following fields: machine learning, data mining, intelligent tutoring, case-based reasoning, multi-agent planning, scheduling, uncertain reasoning, natural language understanding and translation, vision, virtual reality, games, and other topics. Rod Brooks' COG project at MIT, with numerous collaborators, made significant progress in building a humanoid robot. 1995: agents everywhere. 1997: the first official RoboCup soccer match, featuring table-top matches with 40 teams of interacting robots, was held. In the late 1990s, web crawlers and other AI-based information extraction programs became essential to widespread use of the world-wide web. Interactive robot pets ("smart toys") became commercially available, realizing the vision of the 18th-century novelty toy makers.

Famous AI systems (2000s):
In 2000, the Nomad robot explored remote regions of Antarctica looking for meteorite samples. Listed below are a few famous AI systems that have been developed over the years.
1. ALVINN (Autonomous Land Vehicle In a Neural Network): In 1989, Dean Pomerleau at CMU created ALVINN, a system which learns to control vehicles by watching a person drive. It contains a neural network whose input is a 30x32-unit two-dimensional camera image; the output layer is a representation of the direction the vehicle should travel. The system drove a car from the East Coast of the USA to the West Coast, a total of about 2850 miles; about 50 miles were driven by a human, and the rest solely by the system.
2. Deep Blue: In 1997, the Deep Blue chess program created by IBM beat the then world chess champion, Garry Kasparov.
3. Machine translation: a system capable of translation between people speaking different languages would be a remarkable achievement of enormous economic and cultural benefit. Machine translation is one of the important fields of endeavour in AI.
4. Autonomous agents: in space exploration, robotic space probes autonomously monitor their surroundings, make decisions and act to achieve their goals. NASA's Mars rovers successfully completed their primary three-month missions in April 2004. The Spirit rover had been exploring a range of Martian hills that took two months to reach; Spirit's twin, Opportunity, had been examining exposed rock layers inside a crater.
5. Internet agents: the explosive growth of the internet has also led to growing interest in internet agents that monitor users' tasks, seek needed information, and learn which information is most useful.

Abridged history of AI:
1943  McCulloch & Pitts: Boolean circuit model of the brain
1950  Turing's "Computing Machinery and Intelligence"
1956  Dartmouth meeting: the term "Artificial Intelligence" adopted
1952-69  "Look, Ma, no hands!"
1950s  Early AI programs, including Samuel's checkers program, Newell & Simon's Logic Theorist, Gelernter's Geometry Engine
1965  Robinson's complete algorithm for logical reasoning
1966-73  AI discovers computational complexity; neural network research almost disappears

1969-79  Early development of knowledge-based systems
1980--   AI becomes an industry
1986--   Neural networks return to popularity
1987--   AI becomes a science (speech recognition)
1995--   The emergence of intelligent agents

AI IN EVERYDAY LIFE (AI APPLICATIONS):
AI techniques are used in many common applications:
o Intelligent user interfaces
o Spell/grammar checkers
o Context-sensitive help systems
o Medical diagnosis systems
o Voice/image recognition (more generally, pattern recognition)
o Scheduling systems (airlines, hotels, manufacturing)
o Error detection/correction in electronic communication
o Program verification / compiler and programming language design
o Web search engines / web spiders
o Web personalization and recommender systems (collaborative/content filtering)
o Personal agents
o Credit card verification in e-commerce / fraud detection
o Data mining and knowledge discovery in databases
o Computer games

INTELLIGENT AGENTS:
An agent is anything that can be viewed as:
1. perceiving its environment through sensors, and
2. acting upon that environment through actuators.
An agent is an entity which is situated in some environment. Other names for agents include software agents, wizards, spiders, intelligent software robots, softbots, and various combinations such as software agent, intelligent agent, bot.

Assumption: every agent can perceive its own actions (but not always their effects).
E.g. MAILBOTS: agents that manage and filter e-mail (to remove spam).
Features of intelligent agents:
1. Reactive: responds to changes in the environment in a timely fashion.
2. Autonomous: control over its own actions.
3. Goal-oriented: purposeful; does not simply act in response to the environment.
4. Temporally continuous: is a continuously running process.
5. Communicative: communicates with other agents, perhaps including people.
6. Learning: changes its behaviour based on its previous experience.
7. Mobility: able to transport itself from one machine to another.
8. Flexibility: actions are not scripted; able to respond dynamically by choosing its own actions.
9. Character: believable personality and emotional state.

10. Veracity: will not knowingly communicate false information.
11. Collaborative ability: collaborates with other agents or humans to perform its tasks.

AGENTS AND ENVIRONMENTS: Explain the different types of agents in detail.
Agents:
o Human agent: eyes, ears and other organs for sensors; hands, legs, mouth and other body parts for actuators.
o Robotic agent: cameras and infrared range finders for sensors; various motors for actuators.
o Software agent: sensory inputs such as keystrokes, file contents and received network packets; it acts on the environment by displaying on the screen, writing files and sending network packets.
Environment:
o The environment is fundamental in determining the behavior of the agent.
o Environments may be real or abstract, and can be represented as an n-dimensional space.

Agents and environments:
Percept: the agent's perceptual (knowledge) input at any given instant.
Percept sequence: the complete history of everything the agent has ever perceived.
An agent's choice of action at any given instant can depend on the entire percept sequence observed to date.
How an agent should act:
An agent's behavior is described by the agent function, which maps percept histories to actions: f : P* -> A.
Internally, the agent function is implemented by an agent program, which runs on the physical architecture to produce f.
Agent function vs. agent program:
o The agent function is an abstract mathematical description.
o The agent program is a concrete implementation, running on the agent architecture.
Agent = architecture + program.

In complex environments:
o An agent does not have complete control over its environment; it has only partial control.
o Partial control means that an agent can influence the environment with its actions.
o An action performed by an agent may fail to have the desired effect.
Example: the vacuum-cleaner world
The vacuum-cleaner world is a made-up world. This particular world contains two locations: A and B.

The agent perceives which square it is in and whether there is dirt in the square.
Actions: Left, Right, Suck, NoOp.
Partial tabulation of the agent function:

Percept sequence                              Action
[A, Clean]                                    Right
[A, Dirty]                                    Suck
[B, Clean]                                    Left
[B, Dirty]                                    Suck
[A, Clean], [A, Clean]                        Right
[A, Clean], [A, Dirty]                        Suck
[A, Clean], [A, Clean], [A, Clean]            Right
[A, Clean], [A, Clean], [A, Dirty]            Suck
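As an illustration, the partial tabulation above can be written as a Python dictionary keyed by percept sequences, with a small lookup loop in the spirit of the table-driven agent discussed later. This is a minimal sketch under assumptions of my own (the names vacuum_table and table_driven_agent are illustrative, not from the notes); a complete table would need an entry for every possible percept sequence.

    # Partial tabulation of the vacuum-world agent function.
    # Keys are percept sequences (tuples of (location, status) percepts).
    vacuum_table = {
        (("A", "Clean"),): "Right",
        (("A", "Dirty"),): "Suck",
        (("B", "Clean"),): "Left",
        (("B", "Dirty"),): "Suck",
        (("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Dirty")): "Suck",
        (("A", "Clean"), ("A", "Clean"), ("A", "Clean")): "Right",
        (("A", "Clean"), ("A", "Clean"), ("A", "Dirty")): "Suck",
    }

    percepts = []  # the percept sequence observed so far

    def table_driven_agent(percept):
        """Append the percept to the history and look the whole history up."""
        percepts.append(percept)
        return vacuum_table.get(tuple(percepts), "NoOp")

    print(table_driven_agent(("A", "Clean")))  # Right
    print(table_driven_agent(("A", "Dirty")))  # Suck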

GOOD BEHAVIOR: The Concept of Rationality. Explain the good behavior of an AI agent.
Rational agents:
An agent should strive to "do the right thing", based on what it can perceive and the actions it can perform. The agent is most successful if it does the right thing; therefore its performance has to be measured.

a) Performance measure:
The performance measure embodies the criterion for success of an agent's behavior. The agent generates a sequence of actions in its environment according to the percepts it receives; this sequence of actions takes the environment through a sequence of states. If that sequence of states is desirable, then the agent has performed well.
Example: the vacuum-cleaner problem. If the performance measure is the amount of dirt cleaned up, an agent could maximize it by cleaning up the dirt, dumping it all back on the floor, and cleaning it up again. A more suitable measure would reward the agent for having a clean floor.
Design of performance measures: it is better to design performance measures according to what one actually wants in the environment, rather than according to how one thinks the agent should behave.

b) Rationality:
Rationality depends on four things:
o The performance measure that defines the criterion of success.
o The agent's prior knowledge of the environment.
o The actions that the agent can perform.
o The agent's percept sequence to date.
Ideal rational agent: for each possible percept sequence, a rational agent should select an action that is expected to maximize its performance measure, given the evidence provided by the percept sequence and whatever built-in knowledge the agent has.
Example: vacuum-cleaner agent. Let us assume the following:

o The performance measure awards one point for each clean square at each time step, over a lifetime of 1000 time steps.
o The geography of the environment is known a priori, but the dirt distribution and the initial location of the agent are not.
o Clean squares stay clean and sucking cleans the current square.
o The Left and Right actions move the agent left and right, except when this would take the agent outside the environment, in which case the agent remains where it is.
o The only available actions are Left, Right, Suck and NoOp (do nothing).
o The agent correctly perceives its location and whether that location contains dirt.
Under these circumstances the agent is rational: its expected performance is at least as high as any other agent's.

Irrational agent: the same agent would be irrational under different circumstances.
o Once all the dirt is cleaned up, it will oscillate needlessly back and forth (from square A to B and from B to A).
o If the performance measure includes a penalty of one point for each movement left or right, the agent will fare poorly; a better agent for this case would do nothing once it is sure that all the squares are clean.
o If clean squares can become dirty again, the agent should occasionally check them and clean them if needed.

c) Omniscience, learning and autonomy:
An omniscient agent knows the actual outcome of its actions and can act accordingly.
Example: John is walking along the road one day and sees an old friend across the street. Meanwhile, at 33,000 feet, a cargo door falls off a passing airliner, and before he makes it to the other side of the street he is flattened. This example shows that rationality is not the same as perfection:
1. Rationality maximizes expected performance.
2. Perfection maximizes actual performance.
3. Doing actions in order to modify future percepts is sometimes called information gathering.
An agent that relies on the prior knowledge of its designer rather than on its own percepts lacks autonomy. An agent is autonomous if its behavior is determined by its own experience (with the ability to learn and adapt).

THE NATURE OF ENVIRONMENTS: Specifying the task environment (PEAS).
Write a short note on the nature of environments. Explain briefly the properties of PEAS.
PEAS: in designing an agent, the first step must always be to specify the task environment (PEAS) as fully as possible.
1. PEAS for an automated taxi driver
o Performance measure: safe, fast, legal, comfortable trip, maximize profits
o Environment: roads, other traffic, pedestrians, customers
o Actuators: steering wheel, accelerator, brake, signal, horn
o Sensors: cameras, sonar, speedometer, GPS, odometer, engine sensors, keyboard

2. PEAS for a medical diagnosis system
o Performance measure: healthy patient, minimize costs and lawsuits
o Environment: patient, hospital, staff

o Actuators: screen display (questions, tests, diagnoses, treatments, referrals)
o Sensors: keyboard (entry of symptoms, findings, patient's answers)
3. PEAS for a satellite image analysis system
o Performance measure: correct image categorization
o Environment: downlink from orbiting satellite
o Actuators: display categorization of scene
o Sensors: color pixel arrays
4. PEAS for a part-picking robot
o Performance measure: percentage of parts in correct bins
o Environment: conveyor belt with parts, bins
o Actuators: jointed arm and hand
o Sensors: camera, joint angle sensors
5. PEAS for a refinery controller
o Performance measure: maximize purity, yield, safety
o Environment: refinery, operators
o Actuators: valves, pumps, heaters, displays
o Sensors: temperature, pressure, chemical sensors
6. PEAS for an interactive English tutor
o Performance measure: maximize student's score on test
o Environment: set of students
o Actuators: screen display (exercises, suggestions, corrections)
o Sensors: keyboard

Environment types (attributes of environments):
1. Fully observable vs. partially observable: an agent can be considered an agent only if it has the ability to observe its environment. Fully observable environments are convenient, because the agent need not maintain any internal state to keep track of the world. An environment might be partially observable because of noisy and inaccurate sensors, or because parts of the state are simply missing from the sensor data.
Example: the game Solitaire is observable, while internet shopping is not.
2. Deterministic vs. stochastic vs. strategic: a deterministic environment is one where the next (future) state of the environment is completely determined by the current state and the actions of the agent. An environment is stochastic if there is some element of uncertainty or outside influence involved.
Examples: Solitaire is deterministic while taxi driving is not; internet shopping is partly deterministic.
A strategic environment is fully determined by the preceding state combined with the actions of multiple agents.
3. Episodic vs. sequential: in episodic environments, the agent's experience is divided into atomic "episodes" (each episode consists of the agent perceiving and then performing a single action), and the choice of action in each episode depends only on the episode itself. The task environment is episodic if each of the agent's tasks does not rely on past episodes and cannot affect future episodes.
Examples: classification tasks.
In sequential environments, the current decision could affect all future decisions.
Examples: chess and taxi driving.
4. Static vs. dynamic: a static environment does not change.

Static environments are easy to deal with because the agent need not keep looking at the world while it is deciding on an action, nor need it worry about the passage of time.
Examples of static environments: Solitaire and crossword puzzles.
Dynamic environments, in contrast, continuously ask the agent what it wants to do. The environment is semi-dynamic if the environment itself does not change with the passage of time but the agent's performance score does.
Examples: taxi driving is dynamic; chess, when played with a clock, is semi-dynamic.
5. Discrete vs. continuous: a discrete environment has a finite number of possible states, whereas the number of states in a continuous environment is infinite.
Examples of discrete environments: chess has a finite number of discrete states, and has discrete sets of percepts and actions; the Solitaire game and internet shopping are also discrete.
Example of a continuous environment: taxi driving has continuous states and actions.
6. Single-agent vs. multi-agent: an agent operating by itself in an environment is a single agent. In a multi-agent environment, the agent acts cooperatively or competitively with other agents.
Examples: a crossword puzzle is a single-agent environment, while chess is a two-agent environment.
The environment type largely determines the agent design. The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous and multi-agent.

Question: does an agent A have to treat an object B as an agent, or can B be treated as a stochastically behaving object? The key distinction is whether B's behavior is best described as maximizing a performance measure whose value depends on agent A's behavior.
Examples: chess is a competitive multi-agent environment, while taxi driving is a partially cooperative multi-agent environment.

THE STRUCTURE OF AGENTS: List & explain the types of agents. Write a note on the simple reflex agent. List the types of agent program and explain any two in detail. (5 & 10 marks)
All agents have the same basic structure:

o Accept percepts from the environment.
o Generate actions.

Agent functions:
An agent is completely specified by the agent function mapping percept sequences to actions. One agent function (or a small equivalence class) is rational.
Aim: find a way to implement the rational agent function concisely, i.e. design an agent program.
Agent = agent program + architecture.
Architecture: some sort of computing device with physical sensors and actuators (a PC, a robotic car). It should be appropriate: a Walk action requires legs.
Agent program: takes the current percept as input from the sensors and returns an action to the actuators. While the agent function takes the whole percept history, the agent program takes just the current percept as input, since that is the only input available from the environment.

There are two types of agent programs:
o A skeleton agent: the agent program receives only a single percept as its input. If the percept is a new input, then the agent updates its memory with the new percept.
o A table-driven agent: a table contains indexed percept sequences with their corresponding actions. The input percept sequence is looked up in the table and the corresponding action is performed.

Drawbacks of the table-driven agent:
o Huge table (of size about |P|^T, where P is the set of possible percepts and T the lifetime).
o Space is needed to store the table.
o It takes a long time to build the table.
o Practically impossible.
o Even with learning, it needs a long time to learn the table entries.

function TABLE-DRIVEN-AGENT(percept) returns an action
  static: percepts, a sequence, initially empty
          table, a table of actions, indexed by percept sequences, initially fully specified
  append percept to the end of percepts
  action <- LOOKUP(percepts, table)
  return action

Agent types: to implement the mapping from percepts to actions, four types of agent program are used:
i) Simple reflex agent
ii) Model-based reflex agent
iii) Goal-based agent
iv) Utility-based agent

i) Simple reflex agents:
Based on condition-action rules and implemented with an appropriate production system. They are stateless devices which do not have memory of past world states. The agent works by finding a rule whose condition matches the current situation (rule-based systems). It cannot be implemented as a table lookup: the percepts are too complex. So an abstraction of the table has to be created by coding common input/output associations. The abstraction is prepared as a list of condition-action rules.


Figure: schematic diagram of a simple reflex agent.
This only works if the current percept is sufficient for making the correct decision (i.e. it works only if the environment is observable). Simple reflex agents are simple, but they turn out to be of very limited intelligence.
Example 1: the vacuum agent. In this problem the decision is based only on the current location and on whether it contains dirt.

function REFLEX-VACUUM-AGENT([location, status]) returns an action
  if status = Dirty then return Suck
  else if location = A then return Right
  else if location = B then return Left

Example 2: consider a taxi driver. If the car in front brakes and its brake lights come on, then braking has to be initiated.

This is done by a condition-action rule: if the car in front is braking (condition), then initiate braking (action).
In the schematic diagram, a rectangle denotes the current internal state of the agent's decision process, and an oval denotes the background information used in the process.

function SIMPLE-REFLEX-AGENT(percept) returns an action
  static: rules, a set of condition-action rules
  state  <- INTERPRET-INPUT(percept)
  rule   <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

INTERPRET-INPUT: generates a description of the current state from the percept.
RULE-MATCH: returns the first rule in the set of rules that matches the given state description.
RULE-ACTION: the action part of the selected rule, which is executed for the given percept.
The above function will work only if the environment is fully observable, so unobservability causes serious trouble.
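The REFLEX-VACUUM-AGENT pseudocode above translates almost line for line into Python. This is a minimal illustrative sketch (the function name and the percept encoding are assumptions of this example, not from the notes):

    def reflex_vacuum_agent(percept):
        """Simple reflex agent for the two-square vacuum world.

        The decision uses only the current percept (location, status);
        no percept history or internal state is kept.
        """
        location, status = percept
        if status == "Dirty":
            return "Suck"
        elif location == "A":
            return "Right"
        elif location == "B":
            return "Left"

    print(reflex_vacuum_agent(("A", "Dirty")))  # Suck
    print(reflex_vacuum_agent(("B", "Clean")))  # Left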

Infinite loops are often unavoidable for simple reflex agents operating in partially observable environments.

Model-based reflex agents:
o The agent should keep track of the part of the world it cannot see now.
o The agent should maintain some sort of internal state that depends on the percept history and thereby reflects at least some of the unobserved aspects of the current state.
o Updating this internal state information requires two kinds of knowledge:
  1. Information about how the world evolves independently of the agent.
  2. Information about how the agent's own actions affect the world.
Example: an automated taxi needs to keep track of where the other cars are if it cannot see them all at once. A small Python sketch of this idea is given next; the general pseudocode follows it.
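As a concrete and deliberately tiny example, the sketch below keeps an internal model of the two-square vacuum world, so the agent can stop once it believes both squares are clean even though it only ever perceives the square it is in. The class name and the model representation are illustrative assumptions, not part of the original notes.

    class ModelBasedVacuumAgent:
        """Model-based reflex agent sketch for the two-square vacuum world."""

        def __init__(self):
            self.model = {"A": None, "B": None}   # believed status of each square

        def __call__(self, percept):
            location, status = percept
            self.model[location] = status          # how the world is (observed part)
            if all(s == "Clean" for s in self.model.values()):
                return "NoOp"                      # believes everything is clean
            if status == "Dirty":
                self.model[location] = "Clean"     # effect of our own Suck action
                return "Suck"
            return "Right" if location == "A" else "Left"

    agent = ModelBasedVacuumAgent()
    print(agent(("A", "Dirty")))   # Suck
    print(agent(("A", "Clean")))   # Right  (B's status is still unknown)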

function REFLEX-AGENT-WITH-STATE(percept) returns an action
  static: state, a description of the current world state
          rules, a set of condition-action rules
          action, the most recent action, initially none
  state  <- UPDATE-STATE(state, action, percept)
  rule   <- RULE-MATCH(state, rules)
  action <- RULE-ACTION[rule]
  return action

The current percept is combined with the old internal state to generate an updated description of the current state. The function UPDATE-STATE is responsible for creating the new internal state description; it uses information about how the world evolves to keep track of the unseen parts of the world.

Goal-based agents:
If, in addition to knowledge about the current state of the environment, the agent needs some sort of goal information to choose its actions, it is called a goal-based agent. The agent program can combine the goal information with information about the results of possible actions in order to choose actions that achieve the goal.


Example: at a road junction, the taxi can turn left, turn right or go straight on. The correct decision depends on the goal state, i.e. where the taxi is trying to go. Sometimes the goal can be reached by a single action; sometimes the agent has to consider long sequences of twists and turns to find a way to achieve the goal.
Searching & planning: these are used to find the way to the goal state, i.e. to find a sequence of actions that achieves the goal.
Example: the automated taxi.
a. A goal-based agent slows the car down before it actually has to brake, whereas the reflex agent simply brakes after seeing the brake lights of the car in front.
b. If it starts to rain while driving, the goal-based agent can update its knowledge of how effectively the brakes will operate, whereas for a reflex agent many condition-action rules would have to be rewritten.

Goal-based agents vs. reflex-based agents:
o Although the goal-based agent appears less efficient, it is more flexible, because the knowledge that supports its decisions is represented explicitly and can be modified.
o For the reflex agent, on the other hand, we would have to rewrite many condition-action rules.
o The goal-based agent's behavior can easily be changed; the reflex agent's rules must be changed for each new situation.

Utility-based agents:
Goals alone are not enough to generate high-quality behavior in most environments; an agent that, in addition to goals, uses a measure of how desirable each state is, is called a utility-based agent. Usually there are several possible actions that can be taken in a given situation; in such cases, the utility of the next achieved state can be taken into consideration to arrive at a decision.
Utility function: a utility function maps a state (or a sequence of states) onto a real number which describes the associated degree of happiness ("utility" means the quality of being useful).
The agent can also use these numbers to weigh the importance of competing goals.
Example: a utility-based taxi-driver agent. Goal: to reach the destination. The utility function helps the agent to reach the goal state quickly, safely, reliably and cheaply.


A utility function allows rational decisions in two kinds of cases:
o When there are conflicting goals, only some of which can be achieved (for example speed and safety), the utility function specifies the appropriate trade-off.
o When there are several goals, none of which can be achieved with certainty, the likelihood of success can be weighed against the importance of the goals.
By maximizing an explicit utility function, such an agent can make rational decisions.

Learning agents:
Turing proposed that instead of actually programming intelligent machines by hand, which is too much work, we should build learning machines and then teach them. Learning allows the agent to operate in initially unknown environments and to become more competent than its initial knowledge alone might allow.

Four conceptual components of a learning agent:
o Learning element: responsible for making improvements. It uses feedback from the critic on how the agent is doing and determines how the performance element should be modified.
o Performance element: responsible for selecting external actions. It keeps choosing the actions that are best, and consists of the collection of knowledge and procedures for selecting actions.
o Critic: gives feedback to the learning element on how the agent is doing and determines how the performance element should be modified to do better in the future.
o Problem generator: responsible for suggesting actions that will lead to new and informative experiences.

PROBLEM SOLVING AGENTS:
A problem-solving agent is one type of goal-based agent: it decides what to do by finding sequences of actions that lead to desirable states. If the agent understands the definition of a problem, it then has to search for a solution, which implies that the agent should maximize its performance measure.

Problem types:
o Deterministic, fully observable: single-state problem. The agent knows exactly which state it will be in; the solution is a sequence of actions.
o Non-observable: conformant problem. The agent may have no idea where it is; the solution (if any) is a sequence of actions.
o Nondeterministic and/or partially observable: contingency problem. Percepts provide new information about the current state; the solution is a tree or policy; search and execution are often interleaved.
o Unknown state space: exploration problem (online).

Steps to maximize the performance measure:
o Goal formulation, based on the current situation and the agent's performance measure, is the first step in problem solving.
o Problem formulation is the process of deciding what actions and states to consider, given a goal.
o Search: an agent with many options of unknown value can examine different possible sequences of actions that lead to states of known value and then choose the best one. The process of looking for such a sequence is called search.
o Solution: a search algorithm takes a problem as input and returns a solution in the form of an action sequence.
o Execution: if there is a solution, the actions it recommends can be carried out; this is called the execution phase. Once the solution has been executed, the agent will formulate a new goal.

function SIMPLE-PROBLEM-SOLVING-AGENT(percept) returns an action
  inputs: percept, a percept
  static: seq, an action sequence, initially empty
          state, some description of the current world state
          goal, a goal, initially null
          problem, a problem formulation
  state <- UPDATE-STATE(state, percept)
  if seq is empty then do
      goal    <- FORMULATE-GOAL(state)
      problem <- FORMULATE-PROBLEM(state, goal)
      seq     <- SEARCH(problem)
  action <- FIRST(seq)
  seq    <- REST(seq)
  return action

The problem-solving agent assumes the environment is:
o Static: changes that occur while it is deliberating are not paid attention to.
o Observable: the initial state is known.
o Discrete: there are finitely many alternative courses (sequences) of action.
o Deterministic: solutions are single sequences of actions.

Example of problem solving:
Problem: on holiday in Romania, currently in Arad; the flight leaves tomorrow from Bucharest. Find a short route to drive to Bucharest.
Formulate the problem:

o States: various cities.
o Actions: drive between cities.
o Solution: a sequence of cities, e.g. Arad, Sibiu, Fagaras, Bucharest.
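To make the formulation concrete, a fragment of it can be written down directly. The sketch below uses an excerpt of the standard Romania road map; the names (road_map, successors, step_cost) are assumptions of this example rather than part of the notes.

    # Route-finding problem formulation sketch.
    # An excerpt of the Romania road map as an undirected weighted graph.
    road_map = {
        ("Arad", "Zerind"): 75, ("Arad", "Sibiu"): 140, ("Arad", "Timisoara"): 118,
        ("Sibiu", "Fagaras"): 99, ("Sibiu", "Rimnicu Vilcea"): 80,
        ("Rimnicu Vilcea", "Pitesti"): 97, ("Pitesti", "Bucharest"): 101,
        ("Fagaras", "Bucharest"): 211,
    }

    def successors(city):
        """Successor function: <action, successor> pairs for driving out of a city."""
        result = []
        for (a, b) in road_map:
            if a == city:
                result.append(("Go(" + b + ")", b))
            elif b == city:
                result.append(("Go(" + a + ")", a))
        return result

    def step_cost(x, action, y):
        """c(x, a, y): the road distance between the two cities."""
        return road_map.get((x, y)) or road_map.get((y, x))

    initial_state = "Arad"
    def goal_test(state):
        return state == "Bucharest"

    print(successors("Arad"))
    # [('Go(Zerind)', 'Zerind'), ('Go(Sibiu)', 'Sibiu'), ('Go(Timisoara)', 'Timisoara')]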

Well-defined problems and solutions:
A problem is defined by four items:
o Initial state: the starting state, which the agent knows itself to be in.
o Successor function: a description of the possible actions available to the agent. For a state x, SUCCESSOR-FN(x) returns a set of <action, successor> ordered pairs, where each action is a legal action in state x and each successor is a state that can be reached from x by applying that action.
  - State space: the set of all states reachable from the initial state by any sequence of actions. The initial state and the successor function together define the state space. The state space forms a graph in which the nodes are states and the arcs between the nodes are actions.
  - Path: a path in the state space is a sequence of states connected by a sequence of actions.
o Goal test: determines whether a given state is a goal state. If there is an explicit set of possible goal states, we simply check whether any one of the goal states has been reached. E.g. in chess, the goal is to reach a state called "checkmate", where the opponent's king is under attack and cannot escape.
o Path cost: a function that assigns a numeric cost to each path. The cost of a path can be described as the sum of the costs of the individual actions along that path. The step cost of taking an action a to go from state x to state y is denoted by c(x, a, y), where c is the cost, x and y are states, and a is the action. Step costs are non-negative.
A solution to a problem is a path from the initial state to a goal state. Solution quality is measured by the path cost function; an optimal solution has the lowest path cost among all solutions.
Total cost = path cost + search cost.

FORMULATING PROBLEMS:
Formulating a problem refers to finding the steps to reach the solution. In the graph:
o Each node represents a possible state.
o A node is designated as the initial state.
o One or more nodes represent goal states, states in which the agent's goal is considered accomplished.
o Each edge represents a state transition caused by a specific agent action.
o Associated with each edge is the cost of performing that transition.
For a given strategy, what is the order in which nodes are generated (or stored) and expanded? With or without checking for duplicated nodes?


A problem is defined by four items:
o Initial state, e.g. At(Arad).
o Actions: S(x) = set of action-state pairs, e.g. S(Arad) = {<Arad -> Zerind, Zerind>, ...}.
o Goal test, which can be explicit (e.g. At(Bucharest)) or implicit (e.g. Checkmate(x)).
o Path cost (additive), e.g. sum of distances, number of actions executed, etc.
Solution: a sequence of actions leading from the initial state to a goal state.

Abstraction:
The process of removing detail from a representation is called abstraction. In the above route-finding problem, the actual state of the world during a driving action includes many details: the travelling companions, the scenery out of the window, how far it is to the next stop, the condition of the road, the weather, and so on. These are irrelevant to finding a way to Bucharest, so they are left out. The choice of a good abstraction involves removing as much detail as possible while retaining validity and ensuring that the abstract actions are easy to carry out.

EXAMPLE PROBLEMS: two categories
o A toy problem is intended to illustrate or exercise various problem-solving methods.
o A real-world problem is one whose solutions people actually care about.

Toy problems:
A toy problem is given a concise, exact description. Toy problems are used by researchers to compare the performance of algorithms.
a. Vacuum world:
o The agent's world (environment) is representable by a discrete set of states.
o The agent's actions are representable by a discrete set of operators.
The world is static and deterministic.

o States: the agent is in one of two locations, each of which might or might not contain dirt. Thus there are 2 × 2^2 = 8 possible world states.


o Initial state: any state can be designated as the initial state.
o Actions (successor function): this generates the legal states that result from trying the three actions (Left, Right and Suck). The state space is shown above.
o Goal test: this checks whether all the squares are clean.
o Path cost: each step costs 1, so the path cost is the number of steps in the path.
This toy problem has discrete locations, discrete dirt and reliable cleaning, and the environment never gets messed up (i.e. a square never becomes dirty again) once cleaned.

o An environment with n locations has n × 2^n states.

The 8-puzzle:

In the 8-puzzle problem we have a 3 × 3 square board and 8 numbered tiles. The board has one blank position. We can alternatively and equivalently look upon each move as a movement of the blank position up, down, left or right. The objective of this puzzle is to move the tiles, starting from an initial position, and arrive at a given goal configuration. The 15-puzzle is similar to the 8-puzzle; it has a 4 × 4 square board and 15 numbered tiles.
The state-space representation for this problem is summarized below:
o States: a state is a description of the location of each of the eight tiles (and the blank).
o Operators/actions: the blank moves left, right, up or down.
o Goal test: the current state matches a given goal configuration.
o Path cost: each move of the blank costs 1.
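The successor function for this formulation is short enough to write out. Below is a minimal Python sketch under assumptions of my own: a state is a tuple read row by row with 0 standing for the blank, and the particular goal configuration shown is just an example.

    GOAL = (0, 1, 2, 3, 4, 5, 6, 7, 8)       # one possible goal configuration
    MOVES = {"Up": -3, "Down": 3, "Left": -1, "Right": 1}

    def successors(state):
        """Return (action, new_state) pairs obtained by sliding the blank."""
        blank = state.index(0)
        row, col = divmod(blank, 3)
        result = []
        for action, delta in MOVES.items():
            if (action == "Up" and row == 0) or (action == "Down" and row == 2) \
               or (action == "Left" and col == 0) or (action == "Right" and col == 2):
                continue                       # the blank would leave the board
            new_state = list(state)
            target = blank + delta
            new_state[blank], new_state[target] = new_state[target], new_state[blank]
            result.append((action, tuple(new_state)))
        return result

    def goal_test(state):
        return state == GOAL

    print(successors((1, 0, 2, 3, 4, 5, 6, 7, 8)))
    # the blank can move Down, Left or Right from the middle of the top row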


Figure (example): partial state space of the 8-puzzle.

The 8-puzzle belongs to the family of sliding-block puzzles, which are often used as test problems for new search algorithms. The 8-puzzle has 9!/2 = 181,440 reachable states and is easily solved. The 15-puzzle (4 × 4 board) has around 1.3 trillion states, and random instances can be solved optimally in a few milliseconds by the best search algorithms. The 24-puzzle (5 × 5 board) has around 10^25 states, and random instances are difficult to solve optimally with current machines and algorithms.

Eight queens problem (n-queens problem):
The problem is to place 8 queens on an 8 × 8 chessboard (more generally, n queens on an n-by-n board) so that no two queens attack each other by being in the same row, in the same column or on the same diagonal.
Figure: a solution to the 8-queens problem.

Two kinds of formulation:

1. Incremental formulation: starts with an empty board and adds a queen at each step.
2. Complete-state formulation: starts with all 8 queens on the board and moves them around.

a) Problem components for the incremental formulation:
o States: any arrangement of 0 to 8 queens on the board.
o Initial state: no queens on the board.
o Successor function: add a queen to any empty square.
o Goal test: 8 queens are on the board, none attacking another.

b) Problem components for an improved incremental formulation:
o States: arrangements of n queens (0 ≤ n ≤ 8), one per column in the leftmost n columns, with no queen attacking another.
o Successor function: add a queen to any square in the leftmost empty column such that it is not attacked by any other queen.
This formulation reduces the state space from 3 × 10^14 for the earlier formulation to just 2,057 states, and solutions are easy to find. For 100 queens, the initial formulation has roughly 10^400 states whereas the improved formulation has about 10^52 states: a huge reduction.
(UNIT-I to be continued...)
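The "no queen attacks another" condition in these formulations is easy to check column by column. The sketch below is a minimal illustration with assumed names (attacks, successors); it follows the improved formulation, placing queens left to right, one per column.

    # Incremental n-queens sketch: a partial state is a list of row indices,
    # one per already-filled column (queens are placed left to right).

    def attacks(rows, new_row):
        """Would a queen in the next column at new_row attack any placed queen?"""
        new_col = len(rows)
        for col, row in enumerate(rows):
            if row == new_row:                        # same row
                return True
            if abs(row - new_row) == new_col - col:   # same diagonal
                return True
        return False                                  # same column impossible by construction

    def successors(rows, n=8):
        """Add a non-attacked queen to the leftmost empty column."""
        return [rows + [r] for r in range(n) if not attacks(rows, r)]

    def goal_test(rows, n=8):
        return len(rows) == n

    print(successors([0]))   # legal second-column placements given a queen at row 0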

Real-world problems:
A real-world problem is one whose solutions people actually care about.

Examples of real-world problems:
o Route finding (computer networks, airline travel planning systems, ...)
o Travelling salesman optimization problem (package delivery, automatic drills, ...)
o Layout problems (VLSI layout, furniture layout, packaging, ...)
o Assembly sequencing (assembly of electric motors, ...)
o Task scheduling (manufacturing, timetables, ...)

The airline travel problem:
o States: each state is represented by a location and the current time.
o Initial state: this is specified by the problem.
o Successor function: this returns the states resulting from taking any scheduled flight that leaves later than the current time plus the within-airport transit time, from the current airport to another airport.

o Goal test: check whether the destination is reached within the prescribed time.
o Path cost: this depends on monetary cost, waiting time, flight time, customs and immigration procedures, time of day, type of airplane, frequent-flyer mileage awards, and so on.

SEARCHING FOR SOLUTIONS: Explain searching for solutions.
o Search tree: the tree generated by the initial state and the successor function that together define the state space.
o Search node: the root of the search tree is a search node corresponding to the initial state, In(Arad).
o Expanding: applying the successor function to the current state, thereby generating a new set of states. The choice of which state to expand is called the search strategy.
The data structure for a node contains:
o State: the state in the state space to which the node corresponds.
o Parent node: the node in the search tree that generated this node.
o Action: the action that was applied to the parent to generate the node.
o Path cost: the cost g(n) of the path from the initial state to the node.
o Depth: the number of steps along the path from the initial state.


The basic algorithm for tree search is: starting from the initial state, repeatedly choose a leaf node of the search tree to expand (according to the search strategy), generate its successors, and continue until either a goal node is chosen for expansion or there are no more nodes to expand.

Measuring problem-solving performance:
How do we evaluate the performance of a problem-solving algorithm? The output of the algorithm is either failure or a solution. The performance measures are:
o Completeness: does it always find a solution when there is one?
o Optimality: does the strategy find the optimal solution?
o Time complexity: how long does it take to find a solution?
o Space complexity: how much memory is needed to perform the search?


Implementation: states vs. nodes
o A state is a (representation of) a physical configuration.
o A node is a data structure constituting part of a search tree; it includes parent, children, depth and path cost g(x).
o States do not have parents, children, depth or path cost!

The EXPAND function creates new nodes, filling in the various fields and using the SUCCESSOR-FN of the problem to create the corresponding states.

Tree search algorithms:
Basic idea: the collection of nodes that have been generated but not yet expanded is called the fringe; each element of the fringe is a leaf node. The data structure for implementing tree search is a queue. The operations on a queue are:
o MAKE-QUEUE(element, ...): creates a queue with the given elements.
o EMPTY(queue): returns true only if there are no more elements in the queue.
o FIRST(queue): returns the first element of the queue.
o REMOVE-FIRST(queue): returns FIRST(queue) and removes it from the queue.
o INSERT(element, queue): inserts an element into the queue and returns the resulting queue.
o INSERT-ALL(elements, queue): inserts a set of elements into the queue and returns the resulting queue.
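The whole tree-search skeleton fits in a few lines once these queue operations are available. The sketch below is a minimal Python illustration under my own naming assumptions (Node, expand, tree_search, and the tiny example graph); the insert_all strategy that is passed in is what turns it into breadth-first, depth-first or another search.

    from collections import namedtuple

    # A search node records the state plus the bookkeeping fields described above.
    Node = namedtuple("Node", "state parent action path_cost depth")

    def expand(node, successors, step_cost):
        """EXPAND: create child nodes using the problem's successor function."""
        children = []
        for action, result in successors(node.state):
            cost = node.path_cost + step_cost(node.state, action, result)
            children.append(Node(result, node, action, cost, node.depth + 1))
        return children

    def tree_search(initial_state, successors, goal_test, step_cost, insert_all):
        """Generic TREE-SEARCH loop: the insert_all strategy decides the search order."""
        fringe = [Node(initial_state, None, None, 0, 0)]
        while fringe:
            node = fringe.pop(0)             # REMOVE-FIRST(fringe)
            if goal_test(node.state):
                return node                  # follow parent links to recover the path
            insert_all(expand(node, successors, step_cost), fringe)
        return None                          # failure

    # Tiny example: append children at the end of the fringe (FIFO) = breadth-first.
    graph = {"S": [("go A", "A"), ("go G", "G")], "A": [("go G", "G")], "G": []}
    goal = tree_search("S", lambda s: graph[s], lambda s: s == "G",
                       lambda x, a, y: 1,
                       lambda children, fringe: fringe.extend(children))
    print(goal.state, goal.path_cost)        # G 1

Appending children to the end of the fringe gives breadth-first search; inserting them at the front instead gives depth-first search.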


(UNIT-I continued)

UNINFORMED SEARCH STRATEGIES: Explain the uninformed search strategies.
Uninformed search is also called blind search, since uninformed strategies use only the information available in the problem definition. All of the search algorithms below make the following assumptions:
o A node can be expanded by the procedure EXPAND.
o The state space graph is a tree: there is only one start state and there is a unique path to each node.
o Whenever a node is expanded into its children, a mechanism is needed to backtrack to its parent, so that when the goal state is reached the path can be traced.
Time and space complexity are measured in terms of:
o b: maximum branching factor of the search tree.
o d: depth of the least-cost solution.
o m: maximum depth of the state space (may be infinite).

Breadth-First Search:
BFS is a simple strategy in which the root node is expanded first, then all the successors of the root node are expanded next, then their successors, and so on. In general, all the nodes at the same level in the search tree are expanded before any nodes at the next level are expanded.

Algorithm: breadth-first search
1. Create a variable called NODE-LIST and set it to the initial state.
2. Until a goal state is found or NODE-LIST is empty, do:
   a. Remove the first element from NODE-LIST and call it E. (If NODE-LIST was empty, quit.)
   b. For each way that each rule can match the state described in E, do:
      1. Apply the rule to generate a new state.
      2. If the new state is a goal state, quit and return this state; otherwise add the new state to the end of NODE-LIST.
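Because NODE-LIST is treated as a first-in, first-out queue, this is exactly breadth-first search. Below is a compact, self-contained Python sketch; the tiny example graph and all names are illustrative assumptions, not data from the notes (the worked trace that follows uses a similar but weighted graph).

    from collections import deque

    def breadth_first_search(start, goal_test, successors):
        """BFS: expand the shallowest unexpanded node first (FIFO node list)."""
        frontier = deque([[start]])          # queue of paths, oldest first
        while frontier:
            path = frontier.popleft()        # remove the first element (step 2a)
            state = path[-1]
            if goal_test(state):
                return path                  # shortest path in number of steps
            for next_state in successors(state):
                frontier.append(path + [next_state])   # add to the end (step 2b)
        return None

    graph = {"S": ["A", "B", "C"], "A": ["D", "E", "G"], "B": ["G"],
             "C": ["G"], "D": [], "E": ["G"], "G": []}
    print(breadth_first_search("S", lambda s: s == "G", lambda s: graph[s]))
    # ['S', 'A', 'G']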

Properties of BFS:
o Enqueue nodes in FIFO (first-in, first-out) order.
o Complete.
o Optimal (i.e. admissible) if all operators have the same cost; otherwise not optimal, but it finds the solution with the shortest path length.
o Exponential time and space complexity, O(b^d), where d is the depth of the solution and b is the branching factor (i.e. number of children) at each node.
o It will take a long time to find solutions with a large number of steps, because it must look at all shorter-length possibilities first.
o A complete search tree of depth d, where each non-leaf node has b children, has a total of 1 + b + b^2 + ... + b^d = (b^(d+1) - 1)/(b - 1) nodes.
o For a complete search tree of depth 12, where every node at depths 0, ..., 11 has 10 children and every node at depth 12 has 0 children, there are 1 + 10 + 100 + 1000 + ... + 10^12 = (10^13 - 1)/9, i.e. about 10^12 nodes in the complete search tree. If BFS expands 1000 nodes/sec and each node uses 100 bytes of storage, then BFS will take 35 years to run in the worst case, and it will use 111 terabytes of memory!

Example trace (subscripts denote path cost g):

Expanded node    Node list
                 { S0 }
S0               { A3 B1 C8 }
A3               { B1 C8 D6 E10 G18 }
B1               { C8 D6 E10 G18 G21 }
C8               { D6 E10 G18 G21 G13 }
D6               { E10 G18 G21 G13 }
E10              { G18 G21 G13 }
G18              { G21 G13 }

Solution path found is S-A-G, with cost 18. Number of nodes expanded (including the goal node) = 7.

Uniform-Cost Search:
o BFS expands the shallowest unexpanded node; uniform-cost search (UCS) expands the node n with the lowest path cost g(n).
o UCS does not care about the number of steps, only about the total path cost.
o It can, however, get stuck if there is a zero-cost action leading back to the same state (e.g. a NoOp action).
o UCS is guided by path cost rather than depth.
o Complete: yes, if every step cost is at least some positive constant ε.
o Time: the number of nodes with g no greater than the cost of the optimal solution, O(b^⌈C*/ε⌉), where C* is the cost of the optimal solution and ε is a positive constant (the minimum step cost).

Example trace (subscripts denote path cost g):

Expanded node    Node list
                 { S0 }
S0               { B1 A3 C8 }
B1               { A3 C8 G21 }
A3               { D6 C8 E10 G18 G21 }
D6               { C8 E10 G18 G21 }
C8               { E10 G13 G18 G21 }
E10              { G13 G18 G21 }
G13              { G18 G21 }

Solution path found is S-B-G, with cost 13.

Number of nodes expanded (including the goal node) = 7.

Depth-First Search:
DFS expands the deepest node in the current fringe of the search tree. The search proceeds immediately to the deepest level of the search tree; when a node has no successors, the search backs up to the next shallowest node that still has unexplored successors. Nodes are enqueued in LIFO (last-in, first-out) order; that is, the node list is used as a stack.

Properties of DFS:
o May not terminate without a depth bound, i.e. without cutting off search below a fixed depth D (depth-limited search).
o Not complete (with or without cycle detection, and with or without a cutoff depth).
o Exponential time, O(b^m), but only linear space, O(bm).
o Can find long solutions quickly if lucky (and short solutions slowly if unlucky!).
o When the search hits a dead end, it can only back up one level at a time, even if the problem occurs because of a bad operator choice near the top of the tree; hence it only does chronological backtracking.

Example trace (subscripts denote path cost g):

Expanded node    Node list
                 { S0 }
S0               { A3 B1 C8 }
A3               { D6 E10 G18 B1 C8 }
D6               { E10 G18 B1 C8 }
E10              { G18 B1 C8 }
G18              { B1 C8 }

Solution path found is S-A-G, with cost 18. Number of nodes expanded (including the goal node) = 5.


Depth-Limited Search:
Depth-limited search (DLS) gives DFS a depth limit L, i.e. nodes at depth L are treated as having no successors. DLS solves the infinite-path problem. If the depth limit L is less than the depth d of the shallowest goal, the goal will not be found; if L > d, the solution found may not be optimal. Time complexity is O(b^L) and space complexity is O(bL).

function DEPTH-LIMITED-SEARCH(problem, limit) returns soln/fail/cutoff
  return RECURSIVE-DLS(MAKE-NODE(INITIAL-STATE(problem)), problem, limit)

function RECURSIVE-DLS(node, problem, limit) returns soln/fail/cutoff
  cutoff-occurred := false
  if GOAL-STATE(problem, STATE(node)) then return node
  else if DEPTH(node) = limit then return cutoff
  else for each successor in EXPAND(node, problem) do
      result := RECURSIVE-DLS(successor, problem, limit)
      if result = cutoff then cutoff-occurred := true
      else if result != fail then return result
  if cutoff-occurred then return cutoff else return fail

Iterative Deepening Search:
Iterative deepening search (IDS) combines the benefits of DFS and BFS. It gradually increases the depth limit: first 0, then 1, then 2, and so on, until the goal is found. Like DFS its memory requirement is very modest, and like BFS it is complete when the branching factor is finite.

Algorithm: first do DFS to depth 0 (i.e. treat the start node as having no successors); then, if no solution is found, do DFS to depth 1, and so on until a solution is found:
  until a solution is found, do DFS with depth cutoff c; c := c + 1

Properties of iterative deepening search:
o Complete.

o Optimal/admissible if all operators have the same cost; otherwise not optimal, but it guarantees finding a solution of shortest length (like BFS).
o Time complexity is a little worse than BFS or DFS, because nodes near the top of the search tree are generated multiple times; but because almost all of the nodes are near the bottom of the tree, the worst-case time complexity is still exponential, O(b^d).
o If the branching factor is b and the solution is at depth d, then nodes at depth d are generated once, nodes at depth d-1 are generated twice, etc.
  Time: (d + 1)b^0 + d·b^1 + (d - 1)b^2 + ... + b^d = O(b^d)
o Numerical comparison for b = 10 and d = 5, solution at the far right of the tree:
  N(IDS) = 50 + 400 + 3,000 + 20,000 + 100,000 = 123,450
  N(BFS) = 10 + 100 + 1,000 + 10,000 + 100,000 + 999,990 = 1,111,100
o Linear space complexity, O(bd), like DFS.
o Has the advantage of BFS (completeness) and also the advantages of DFS (limited space; finds longer paths more quickly).
o Generally preferred for large state spaces where the solution depth is unknown.
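The two routines, depth-limited search and the iterative deepening loop around it, can be sketched in a few lines of Python. This is a minimal illustration with assumed names and a toy graph, not the notes' own implementation.

    def depth_limited_search(state, goal_test, successors, limit, path=None):
        """Recursive DFS with a depth cutoff; returns a path, 'cutoff' or None."""
        path = path or [state]
        if goal_test(state):
            return path
        if limit == 0:
            return "cutoff"
        cutoff_occurred = False
        for next_state in successors(state):
            result = depth_limited_search(next_state, goal_test, successors,
                                          limit - 1, path + [next_state])
            if result == "cutoff":
                cutoff_occurred = True
            elif result is not None:
                return result
        return "cutoff" if cutoff_occurred else None

    def iterative_deepening_search(start, goal_test, successors, max_depth=50):
        """Run DLS with limits 0, 1, 2, ... until a solution is found."""
        for limit in range(max_depth + 1):
            result = depth_limited_search(start, goal_test, successors, limit)
            if result != "cutoff":
                return result
        return None

    graph = {"S": ["A", "B"], "A": ["C", "G"], "B": ["D"], "C": [], "D": [], "G": []}
    print(iterative_deepening_search("S", lambda s: s == "G", lambda s: graph[s]))
    # ['S', 'A', 'G']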

Bidirectional search:
Bidirectional search runs two simultaneous searches: one forward from the initial state and the other backward from the goal, stopping when the two searches meet in the middle.
o Works well only when there are unique start and goal states.
o Requires the ability to generate predecessor states.
o Can (sometimes) lead to finding a solution more quickly.
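A minimal sketch of the idea, assuming explicit successor and predecessor functions (all names and the toy graph are illustrative; this simple interleaving only shows how the two frontiers meet and is not guaranteed to return the very shortest path in every graph). For undirected graphs the successor function can simply be reused as the predecessor function.

    from collections import deque

    def bidirectional_search(start, goal, successors, predecessors):
        """Two simultaneous breadth-first searches that stop when they meet."""
        if start == goal:
            return [start]
        forward = {start: [start]}            # state -> path from start
        backward = {goal: [goal]}             # state -> path to goal
        f_frontier, b_frontier = deque([start]), deque([goal])
        while f_frontier and b_frontier:
            state = f_frontier.popleft()      # expand one node forward
            for nxt in successors(state):
                if nxt not in forward:
                    forward[nxt] = forward[state] + [nxt]
                    if nxt in backward:                       # the searches meet
                        return forward[nxt] + backward[nxt][1:]
                    f_frontier.append(nxt)
            state = b_frontier.popleft()      # expand one node backward
            for prev in predecessors(state):
                if prev not in backward:
                    backward[prev] = [prev] + backward[state]
                    if prev in forward:
                        return forward[prev] + backward[prev][1:]
                    b_frontier.append(prev)
        return None

    graph = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["G"], "G": []}
    preds = {"G": ["C"], "C": ["A", "B"], "A": ["S"], "B": ["S"], "S": []}
    print(bidirectional_search("S", "G", lambda s: graph[s], lambda s: preds[s]))
    # ['S', 'A', 'C', 'G']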


Comparing uninformed search strategies:

AVOIDING REPEATED STATES: How to avoid repeated states? Explain.
Repeated states arise when the search expands states that have already been expanded before; this enlarges the effective state space and increases the cost and the number of node expansions. In order to avoid repeated states:
1. Do not return to the state you just came from.
2. Do not create paths with cycles in them.
3. Do not generate any state that was ever generated before.
The net effect depends on the frequency of loops in the state space. (Rectangular grid: how many different states are reachable within a path of length d?)
o Closed list: the tree-search algorithm is augmented with a data structure called the closed list, which stores every expanded node.
o Open list: the fringe of unexpanded nodes is sometimes called the open list.
Failure to detect repeated states can turn a linear problem into an exponential one!
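A minimal sketch of tree search augmented with a closed list (graph search); the names and the tiny example graph are illustrative assumptions. The example mirrors the state space described in the figure caption below, where two actions lead from each state to the next, so without the closed list the number of distinct paths doubles at every level.

    from collections import deque

    def graph_search(start, goal_test, successors):
        """Breadth-first graph search: a closed list of already-expanded states
        prevents repeated states from being expanded again."""
        closed = set()                       # the closed list
        frontier = deque([[start]])          # the open list (fringe) of paths
        while frontier:
            path = frontier.popleft()
            state = path[-1]
            if goal_test(state):
                return path
            if state in closed:              # already expanded: skip it
                continue
            closed.add(state)
            for next_state in successors(state):
                frontier.append(path + [next_state])
        return None

    # Two actions lead from A to B, two from B to C, and so on.
    graph = {"A": ["B", "B"], "B": ["C", "C"], "C": ["G", "G"], "G": []}
    print(graph_search("A", lambda s: s == "G", lambda s: graph[s]))
    # ['A', 'B', 'C', 'G']  -- each state is expanded only once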

Figure: (a) A state space in which there are two possible actions leading from A to B, two from B to C, and so on; the state space contains d + 1 states, where d is the maximum depth. (b) The corresponding search tree, which has 2^d paths through the space. (c) A rectangular grid state space; the states within 2 steps of the initial state (A) are shown in gray.

