
ARTIFICIAL INTELLIGENCE
Agent and Environment

AGENT AND ENVIRONMENT


An agent is anything that can perceive its environment through sensors
and act upon that environment through effectors.
A human agent has sensory organs such as eyes, ears, nose, tongue,
and skin as sensors, and other organs such as hands, legs, and mouth
as effectors.
A robotic agent has cameras and infrared range finders for sensors,
and various motors and actuators for effectors.
A software agent has encoded bit strings as its programs and actions.

AGENT TERMINOLOGY
Performance Measure of Agent It is the criterion that determines
how successful an agent is.
Behavior of Agent It is the action that an agent performs after any
given sequence of percepts.
Percept It is the agent's perceptual input at a given instant.
Percept Sequence It is the history of all that an agent has perceived
to date.
Agent Function It is a map from the percept sequence to an action.
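The agent function can be made concrete as a lookup from percept sequences to actions. A minimal sketch (the table entries and percept names here are illustrative, not from any specific agent):

```python
# A toy agent function: a map from the percept sequence (the full
# history of percepts) to an action, realized as a lookup table.
def table_driven_agent():
    percepts = []                      # percept sequence accumulated so far
    table = {                          # hypothetical table of sequence -> action
        ("dirty",): "clean",
        ("clean",): "move",
        ("clean", "dirty"): "clean",
    }

    def agent(percept):
        percepts.append(percept)       # extend the percept sequence
        return table.get(tuple(percepts), "no-op")

    return agent

agent = table_driven_agent()
print(agent("clean"))   # move
print(agent("dirty"))   # clean
```

Note that such a table grows with every possible percept history, which is why practical agents compute the function with a program rather than store it.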

An ideal rational agent is one that is capable of taking the expected
actions to maximize its performance measure, on the basis of
Its percept sequence
Its built-in knowledge base
Rationality of an agent depends on the following four factors:
The performance measure, which determines the degree of success.
The agent's percept sequence to date.
The agent's prior knowledge about the environment.
The actions that the agent can carry out.

AGENT TYPES
Simple Reflex Agents
They choose actions based only on the current percept.
They are rational only if a correct decision can be made on the
basis of the current percept alone.
Their environment must be completely observable.
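A minimal sketch of a simple reflex agent, using the standard two-square vacuum world as the example (the square names "A"/"B" and the condition-action rules are illustrative):

```python
# A simple reflex agent: the action depends only on the current
# percept (location, status); no percept history is kept.
def simple_reflex_vacuum_agent(percept):
    location, status = percept
    if status == "Dirty":        # rule: dirty square -> clean it
        return "Suck"
    elif location == "A":        # rule: square A is clean -> move right
        return "Right"
    else:                        # rule: square B is clean -> move left
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))   # Suck
print(simple_reflex_vacuum_agent(("A", "Clean")))   # Right
```

Because the agent ignores history, it works only when the current percept is enough to decide, i.e., when the environment is completely observable.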

MODEL BASED REFLEX AGENTS

They use a model of the world to choose their actions, and they
maintain an internal state.
Model The knowledge about how things happen in the world.
Internal State It is a representation of unobserved aspects of the
current state, depending on the percept history.
Updating the state requires information about
How the world evolves.
How the agent's actions affect the world.
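The same vacuum world can illustrate a model-based reflex agent. This sketch (the class name and the trivial world model are assumptions for illustration) keeps an internal state updated from the percept history:

```python
# A model-based reflex agent: it tracks a believed status for each
# square (internal state), updated by a toy world model that assumes
# squares keep the status they were last seen with.
class ModelBasedVacuumAgent:
    def __init__(self):
        self.state = {"A": None, "B": None}   # internal state: believed status

    def update_state(self, percept):
        location, status = percept
        self.state[location] = status          # fold the new percept into the model

    def act(self, percept):
        self.update_state(percept)
        location, _ = percept
        if self.state[location] == "Dirty":
            return "Suck"
        if all(s == "Clean" for s in self.state.values()):
            return "NoOp"                      # model says both squares are clean
        return "Right" if location == "A" else "Left"

agent = ModelBasedVacuumAgent()
print(agent.act(("A", "Dirty")))   # Suck
print(agent.act(("A", "Clean")))   # Right (B's status is still unknown)
```

Unlike the simple reflex agent, this one can stop once its internal state says there is nothing left to do, even though no single percept shows the whole environment.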

GOAL BASED AGENTS

They choose their actions in order to achieve goals. The goal-based
approach is more flexible than a reflex agent, since the knowledge
supporting a decision is explicitly modeled, thereby allowing for
modifications.
Goal It is the description of desirable situations.
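A minimal sketch of a goal-based agent on a grid (the goal location, action set, and transition model are illustrative assumptions): it predicts each action's successor state and picks the one closest to an explicit goal.

```python
# A goal-based agent: the goal is represented explicitly, so changing
# GOAL changes behavior without rewriting any rules.
GOAL = (2, 2)
ACTIONS = {"up": (0, 1), "down": (0, -1), "left": (-1, 0), "right": (1, 0)}

def predict(state, action):
    """Toy transition model: where the action would lead."""
    dx, dy = ACTIONS[action]
    return (state[0] + dx, state[1] + dy)

def goal_based_agent(state):
    if state == GOAL:
        return "stop"
    # choose the action minimizing Manhattan distance to the goal
    return min(ACTIONS, key=lambda a: abs(predict(state, a)[0] - GOAL[0])
                                    + abs(predict(state, a)[1] - GOAL[1]))

print(goal_based_agent((0, 2)))   # right
```

The flexibility mentioned above shows here: reassigning GOAL retargets the agent, whereas a reflex agent would need its condition-action rules rewritten.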

UTILITY BASED AGENTS

They choose actions based on a preference (utility) for each state.
Goals alone are inadequate when
There are conflicting goals, of which only a few can be achieved.
Goals have some uncertainty of being achieved and you need to weigh
the likelihood of success against the importance of a goal.
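The trade-off between likelihood of success and importance can be captured with expected utility. A minimal sketch (the action names, probabilities, and utility values are illustrative):

```python
# A utility-based agent: each action has uncertain outcomes, given as
# (probability, utility) pairs; the agent maximizes expected utility.
def expected_utility(outcomes):
    """Probability-weighted average utility of an action's outcomes."""
    return sum(p * u for p, u in outcomes)

def utility_based_agent(action_outcomes):
    return max(action_outcomes,
               key=lambda a: expected_utility(action_outcomes[a]))

actions = {
    "safe_route":  [(0.99, 10)],             # near-certain, modest payoff
    "risky_route": [(0.30, 30), (0.70, 0)],  # bigger payoff, likely failure
}
print(utility_based_agent(actions))   # safe_route
```

Here the risky route's expected utility (9.0) falls short of the safe route's (9.9), so the agent prefers the safe route even though the risky one's best case is larger; a purely goal-based agent has no way to express this comparison.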

ENVIRONMENT
Discrete / Continuous If there are a limited number of distinct,
clearly defined states of the environment, the environment is discrete
(for example, chess); otherwise it is continuous (for example, driving).
Observable / Partially Observable If it is possible to determine the
complete state of the environment at each point in time from the
percepts, it is observable; otherwise it is only partially observable.
Static / Dynamic If the environment does not change while an agent
is acting, then it is static; otherwise it is dynamic.

Single agent / Multiple agents The environment may contain other agents
which may be of the same or different kind as that of the agent.
Accessible / Inaccessible If the agent's sensory apparatus can have
access to the complete state of the environment, then the environment
is accessible to that agent; otherwise it is inaccessible.
Deterministic / Non-deterministic If the next state of the environment is
completely determined by the current state and the actions of the agent, then
the environment is deterministic; otherwise it is non-deterministic.
Episodic / Non-episodic In an episodic environment, each episode consists of
the agent perceiving and then acting. The quality of its action depends just on
the episode itself. Subsequent episodes do not depend on the actions in the
previous episodes. Episodic environments are much simpler because the agent
does not need to think ahead.
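As an illustration, the properties above can be recorded as flags for two example environments. The classifications below follow common convention (chess treated as discrete, observable, static, deterministic; taxi driving as the opposite on each axis), but they are illustrative, not definitive:

```python
# Tagging example environments with the property dimensions defined
# above; True means the first term of each pair applies.
environments = {
    "chess": {
        "discrete": True, "observable": True, "static": True,
        "deterministic": True, "episodic": False, "multi_agent": True,
    },
    "taxi_driving": {
        "discrete": False, "observable": False, "static": False,
        "deterministic": False, "episodic": False, "multi_agent": True,
    },
}

for name, props in environments.items():
    tags = ", ".join(k for k, v in props.items() if v)
    print(f"{name}: {tags}")
```

Note that neither example is episodic: in both, earlier actions constrain later ones, which is exactly why episodic environments are called out above as the simpler case.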
