
Introduction:

Folklore hinting at artificial intelligence can be traced back to ancient Egypt, but with the development
of the electronic computer in 1941, the technology finally became available to create machine intelligence.
The term "artificial intelligence" was first coined in 1956 at the Dartmouth conference, and since then
the field has expanded because of the theories and principles developed by its dedicated researchers.
Throughout its short modern history, advancement in AI has been slower than first estimated, but progress
continues to be made. Since the field's birth four decades ago, a variety of AI programs have appeared,
and they have influenced other technological advancements.
Artificial Intelligence, or AI for short, is a combination of computer science, physiology, and
philosophy. AI is a broad topic, consisting of different fields, from machine vision to expert
systems. The element that the fields of AI have in common is the creation of machines that can
"think".

In order to classify machines as "thinking", it is necessary to define intelligence. To what degree does
intelligence consist of, for example, the ability to solve complex problems, or to make generalizations
and recognize relationships? And what about perception and comprehension? Research into the areas of
learning, of language, and of sensory perception has aided scientists in building intelligent machines.
One of the most challenging tasks facing experts is building systems that mimic the behavior of the human
brain, which is made up of billions of neurons and is arguably the most complex matter in the universe.
Perhaps the best way to gauge the intelligence of a machine is British computer scientist Alan Turing's
test. He stated that a computer would deserve to be called intelligent if it could deceive a human into
believing that it was human.

Artificial Intelligence has come a long way from its early roots, driven by dedicated researchers. The
beginnings of AI reach back before electronics, to philosophers and mathematicians such as Boole and
others, whose theorizing on the principles of logic was used as the foundation of AI. AI really began to
intrigue researchers with the invention of the computer in 1943. The technology was finally available, or
so it seemed, to simulate intelligent behavior. Over the following four decades, despite many stumbling
blocks, AI grew from a dozen researchers to thousands of engineers and specialists, and from programs
capable of playing checkers to systems designed to diagnose disease.

AI has always been on the pioneering end of computer science. Advanced-level computer languages, as well
as computer interfaces and word processors, owe their existence to research into artificial intelligence.
The theory and insights brought about by AI research will set the trend in the future of computing. The
products available today are only bits and pieces of what is soon to follow, but they are a movement
towards the future of artificial intelligence. The advancements in the quest for artificial intelligence
have affected, and will continue to affect, our jobs, our education, and our lives.

In short, artificial intelligence means making machines "intelligent": acting as we would expect people to act.
The Beginnings of AI:

Although the computer provided the technology necessary for AI, it was not until the early 1950s that
the link between human intelligence and machines was really observed. Norbert Wiener was one of the first
Americans to make observations on the principle of feedback theory. The most familiar example of feedback
theory is the thermostat: it controls the temperature of an environment by measuring the actual
temperature of the house, comparing it to the desired temperature, and responding by turning the heat up
or down. What was so important about his research into feedback loops was that Wiener theorized that all
intelligent behavior was the result of feedback mechanisms, mechanisms that could possibly be simulated by
machines. This insight influenced much of the early development of AI.
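
A minimal sketch of such a feedback loop, with an invented toy temperature model and made-up constants, might look like this:

```python
# A toy feedback loop in the spirit of Wiener's thermostat example.
# The temperature model and all constants are invented for illustration.
def thermostat_step(measured_temp, setpoint, hysteresis=0.5):
    """Compare the measured value to the desired value and respond."""
    if measured_temp < setpoint - hysteresis:
        return "heat_on"     # too cold: turn the heat up
    if measured_temp > setpoint + hysteresis:
        return "heat_off"    # too warm: turn the heat down
    return "hold"            # within tolerance: leave the heater alone

temp, setpoint = 18.0, 21.0
for step in range(10):
    action = thermostat_step(temp, setpoint)
    # The house loses a little heat each step; the heater adds it back.
    temp += 0.8 if action == "heat_on" else -0.3
    print(f"step {step}: temp={temp:.1f}, action={action}")
```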

Artificial intelligence (AI) is both the intelligence of machines and the branch of
computer science which aims to create it.

Major AI textbooks define artificial intelligence as "the study and design of intelligent
agents," where an intelligent agent is a system that perceives its environment and
takes actions which maximize its chances of success. AI can be seen as a realization
of an abstract intelligent agent (AIA) which exhibits the functional essence of
intelligence. John McCarthy, who coined the term in 1956, defines it as "the science
and engineering of making intelligent machines."

Among the traits that researchers hope machines will exhibit are reasoning,
knowledge, planning, learning, communication, perception and the ability to move
and manipulate objects. General intelligence (or "strong AI") has not yet been
achieved and is a long-term goal of AI research.

In the middle of the 20th century, a handful of scientists began a new approach to
building intelligent machines, based on recent discoveries in neurology, a new
mathematical theory of information, an understanding of control and stability called
cybernetics, and above all the invention of the digital computer, a machine based on
the abstract essence of mathematical reasoning.

The field of modern AI research was founded at a conference on the campus of
Dartmouth College in the summer of 1956. Those who attended would become the
leaders of AI research for many decades, especially John McCarthy, Marvin Minsky,
Allen Newell and Herbert Simon, who founded AI laboratories at MIT, CMU and
Stanford. They and their students wrote programs that were, to most people, simply
astonishing: computers were solving word problems in algebra, proving logical
theorems and speaking English. By the mid-1960s their research was heavily funded
by the U.S. Department of Defense and they were optimistic about the future of the
new field:

1965, H. A. Simon: "[M]achines will be capable, within twenty years, of doing any
work a man can do"
1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial
intelligence' will substantially be solved."

These predictions, and many like them, would not come true. They had failed to
recognize the difficulty of some of the problems they faced. In 1974, in response to
the criticism of England's Sir James Lighthill and ongoing pressure from Congress to
fund more productive projects, the U.S. and British governments cut off all
undirected, exploratory research in AI. This was the first AI Winter.

In the early 80s, AI research was revived by the commercial success of expert
systems (a form of AI program that simulated the knowledge and analytical skills of
one or more human experts) and by 1985 the market for AI had reached more than
a billion dollars. Minsky and others warned the community that enthusiasm for AI
had spiraled out of control and that disappointment was sure to follow. Beginning
with the collapse of the Lisp Machine market in 1987, AI once again fell into
disrepute, and a second, more lasting AI Winter began.
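
To make the parenthetical definition concrete, a rule-based expert system is, at its core, a loop that applies if-then rules to known facts until nothing new can be concluded. The sketch below is a toy illustration with invented facts and rules, not any particular commercial system:

```python
# A toy forward-chaining inference loop, the core mechanism of a rule-based
# expert system. The facts and rules are invented purely for illustration.
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "short_of_breath"}, "refer_to_doctor"),
]
facts = {"fever", "cough", "short_of_breath"}

changed = True
while changed:                       # apply rules until no new facts appear
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)    # the rule fires and asserts its conclusion
            changed = True

print(facts)  # now includes 'flu_suspected' and 'refer_to_doctor'
```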

In the 90s and early 21st century AI achieved its greatest successes, albeit
somewhat behind the scenes. Artificial intelligence was adopted throughout the
technology industry, providing the heavy lifting for logistics, data mining, medical
diagnosis and many other areas. The success was due to several factors: the
incredible power of computers today (see Moore's law), a greater emphasis on
solving specific subproblems, the creation of new ties between AI and other fields
working on similar problems, and above all a new commitment by researchers to
solid mathematical methods and rigorous scientific standards.

Consciousness, Nets and the Future

What do we mean when we say someone is intelligent? Is it that they are, for example, very good at
mathematics or translating foreign languages? These people are certainly good at understanding and
manipulating abstract concepts. But what about poets, novelists and musicians? They are clearly intelligent
because they are creative. Indeed, intelligence is visible in almost every form of human activity - the
ability to adapt, to learn new skills, and to form complex relationships and societies. Much of this appears to be unique to
humans (at least on Earth) and differentiates us from all other species. We might say that all of these
aspects of our lives and behavior can be attributed to the fact that we are conscious.

Unfortunately, there is no precise, widely agreed upon definition of the word consciousness. However,
most of us have an intuitive sense of what is meant by the term. Consciousness, or cognition, is a sort of
awareness - of self, of interaction with the world, of thought processes taking place, and of our ability to at
least partially control these processes. We also associate consciousness with an inner voice that expresses
our high level, deliberate, thoughts, as well as intentionality and emotion. It seems doubtful whether true
intelligence can ever arise in the absence of consciousness. Perhaps one might take the view that
intelligent behavior is the outward sign of a conscious being. If so, any machine which could display
human-like intelligence could be said to be conscious.

This point of view was taken by Alan Turing, who in 1950 invented a test whose result could be used to
determine whether, in any practical sense, a machine could be said to be conscious.

The test is quite simple. You enter a room and encounter two terminals: one terminal connects with a
computer, and the other interfaces with a person who types responses. The goal of the test is for you to
determine which terminal is connected with the computer. You are allowed to ask questions, make
assertions, question feelings and motivations for as long as you wish. If you fail to determine which
terminal is communicating with the computer or guess that the computer is the human, the computer has
passed the test and can be said to be `conscious'.
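
The structure of the test can be sketched in a few lines of code. The respond_human and respond_machine functions below are hypothetical placeholders, since the real contestants are outside the scope of a sketch:

```python
import random

# A purely illustrative sketch of the imitation game's structure; the two
# 'respond' functions are placeholders for the human and the machine.
def respond_human(question):
    return "I felt nervous before my last exam."   # placeholder reply

def respond_machine(question):
    return "I felt nervous before my last exam."   # placeholder reply

def run_test(questions, judge):
    # The judge does not know which terminal, A or B, hides the machine.
    terminals = {"A": respond_human, "B": respond_machine}
    if random.random() < 0.5:
        terminals = {"A": respond_machine, "B": respond_human}
    transcript = {name: [f(q) for q in questions]
                  for name, f in terminals.items()}
    guess = judge(transcript)              # judge names the suspected machine
    identified = terminals[guess] is respond_machine
    return not identified                  # True: the machine passed

# Example with a judge who can only guess at random.
passed = run_test(["How do you feel about poetry?"],
                  judge=lambda transcript: random.choice(["A", "B"]))
print("machine passed:", passed)
```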

Turing invented his test at a time when it was thought that mind-like computers might be only fifty years
away. A whole new science was born with the aim of producing such intelligent machines - the subject of
artificial intelligence or AI.

In fact, that has not happened - initial efforts to create computers with mind-like reasoning have failed
miserably. Many researchers now believe that part of the reason for this failure was that traditional
computers function in a way very different from the brain and that the key to true intelligent machines lies
in understanding in detail the functioning of the brain and emulating this with artificial neural networks.

Needless to say this view is not held by all - some philosophers maintain that the phenomenon of
consciousness cannot be ascribed to purely physical processes (the cooperative firing of networks of
neurons) and is in principle inaccessible to arbitrarily advanced scientific assault. This is the traditional
mind/matter split advocated by seventeenth-century philosophers. There is a famous argument due to
John Searle in 1980 which attempts to rebut the Turing test as a way of assessing consciousness.

In his argument one imagines a non-Chinese speaking person sitting in a room with a long list of rules for
translating strings of Chinese characters into new strings of Chinese characters. When a string of
characters is slipped under the door, the person consults the rules and slips back an appropriate response
under the door. If the incoming strings actually represented questions (like a Turing test), then a
particularly clever and exhaustive set of rules could conceivably allow the person in the room to produce
outgoing strings that furnished answers to the questions.
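
Mechanically, the person in the room is executing nothing more than a lookup procedure. A toy sketch, with an invented rule book standing in for the "particularly clever and exhaustive set of rules":

```python
# The person in the room: pure lookup, with no understanding of content.
# The rule book is an invented stand-in for Searle's exhaustive set of rules
# (romanized strings stand in for the Chinese characters).
rule_book = {
    "ni hao ma?": "wo hen hao.",
    "ni ji sui?": "wo san shi sui.",
}

def person_in_room(incoming_string):
    # Consult the rules and slip the prescribed response back under the door.
    return rule_book.get(incoming_string, "qing zai shuo yi bian.")

print(person_in_room("ni hao ma?"))  # looks like understanding; it is lookup
```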

From the point of view of a person outside, the room would seem to contain an intelligent person who is
responding to the questions. Yet the person in the room has no understanding of the content of these
questions - he or she is merely acting out a set of rules, translating one set of symbols into another. In
other words, while we could possibly program a machine to mock up the effects of intelligence, it would
never be truly conscious. While this criticism may be applied to the old style of AI (rule-based AI
systems rather similar to Searle's Chinese room exist and have met with some success - they are called
expert systems), it is not clear that it truly applies to neural network based AI, since there is no real
concept of a set of rules determining a response. Consciousness is not envisaged as arising out of a
machine obeying a set of rules but as some as yet ill-defined property of the natural functioning of
billions of neurons.

Building a Human Computer


Of course, it is entirely possible that the processes involved in brain function are so complex as to make an
effective understanding of them impossible in any practical sense. Then AI is rendered a practical
impossibility. At the other end of the spectrum a few scientists hold that there is nothing all that special
about consciousness and that any machine packed with enough intelligence will automatically acquire
consciousness along the way.

Our investigations seem to indicate that existing neural networks exhibit many promising features. They
do not require sophisticated rule sets to be programmed into them but function by a dynamic pattern
matching mechanism much closer, at least in spirit, to the firing activities found in the brain. But at
present the models are somewhat stupid - they can only learn by a slow procedure which requires constant
supervision from an external `teacher'. Their decision-making ability is strictly limited. While this has
not stopped them from becoming a very useful tool in many areas, it effectively prevents them from
tackling more complicated problems and ultimately from acquiring any degree of true intelligence.

What is really needed is a way of allowing the internode connections to change with time, not according
to some scheme determined by the external teacher, but as a response to the node firings activated by
input patterns. After all, this is essentially what happens in the brain - the neurons and their
connections self-organize into a structure which, considered as a whole, is capable of very sophisticated
functions. The network must select its own output patterns and connection strengths dynamically.
Furthermore, the association between input and output must be useful - the network must be able to make
decisions as a consequence of its firings. If it is to store memories, it must first be able to see
whether a new input is close to an old memory, or really new. If the latter, it must be able to store the
new pattern without destroying the old. It must be able to focus only on certain types of pattern and
screen out the rest in order to perform useful tasks. Ultimately, it must be able to make complex
decisions by a succession of hierarchical pattern association steps.
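
One way to read the store-without-destroying requirement is as a vigilance check: compare each input to stored prototypes, refine the closest one if it is similar enough, and otherwise allocate a new one. The sketch below is an invented toy in that spirit (the threshold and learning rate are arbitrary), not a faithful reproduction of any published model:

```python
import numpy as np

def present(pattern, prototypes, vigilance=0.8, rate=0.2):
    """Refine the closest stored memory if the input resembles it;
    otherwise store the input as a brand-new prototype."""
    pattern = pattern / np.linalg.norm(pattern)
    if prototypes:
        sims = [float(p @ pattern) for p in prototypes]
        best = int(np.argmax(sims))
        if sims[best] >= vigilance:       # close to an old memory: refine it
            prototypes[best] += rate * (pattern - prototypes[best])
            prototypes[best] /= np.linalg.norm(prototypes[best])
            return best
    prototypes.append(pattern.copy())     # really new: store without overwriting
    return len(prototypes) - 1

memories = []
for x in np.random.rand(20, 8):           # twenty random 8-component patterns
    present(x, memories)
print(f"{len(memories)} prototypes stored")
```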

Rather surprisingly, there are new neural network models (for example the Kohonen network and Stephen
Grossberg's 1987 ART network) which attempt with some success to satisfy some of these criteria. These
networks learn by a `competitive' process in which nodes on the hidden layers compete to represent the
input image in such a way that the final representation of the input pattern is localized on a single winning
unit. The way this happens is that when an image is presented to the network, some node on the hidden
layer will respond most strongly to the image. The connections to this node are then progressively
strengthened in such a way as to increase the node's response to this input pattern whilst the connections
of all the other nodes are adjusted to minimize their response. Thus a given type of image can be made to
excite only one hidden layer neuron. A new image can then be made to activate another node and so on. In
this way different generic features of an input pattern can be handled by different hidden layer nodes.
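
A minimal sketch of this winner-take-all update, with arbitrary sizes and learning rates: the most responsive node has its connections pulled toward the input, while the other nodes' responses to it are suppressed.

```python
import numpy as np

rng = np.random.default_rng(0)
n_hidden, n_inputs = 4, 16                    # arbitrary sizes for the sketch
weights = rng.random((n_hidden, n_inputs))    # one weight row per hidden node

def competitive_step(x, weights, rate=0.1):
    responses = weights @ x                   # how strongly each node responds
    winner = int(np.argmax(responses))        # the node responding most strongly
    # Strengthen the winner's connections toward this input pattern...
    weights[winner] += rate * (x - weights[winner])
    # ...and adjust the other nodes to minimize their response to it.
    for i in range(n_hidden):
        if i != winner:
            weights[i] = np.clip(weights[i] - 0.01 * x, 0.0, None)
    return winner

for x in rng.random((100, n_inputs)):
    competitive_step(x, weights)
```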

Furthermore, the hidden layer nodes can also have connections between each other which can be arranged
in such a way that nodes that are strongly connected within the layer respond to similar images. For
example, after such a competitive learning process one hidden layer node might respond to ellipses of a
certain size, whilst one of its `neighbors' (those to which it has a strong intra-layer connection) might
respond to ellipses of the same size but rotated through some angle.
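
This neighborhood structure can be sketched, in the style of a Kohonen map, by updating not only the winner but also nearby nodes, with strength falling off with distance along the layer (the Gaussian width and rates below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden, n_inputs = 10, 16
weights = rng.random((n_hidden, n_inputs))

def neighborhood_step(x, weights, rate=0.1, width=1.5):
    winner = int(np.argmax(weights @ x))
    for i in range(len(weights)):
        # Nodes near the winner on the layer are pulled along with it,
        # so strongly connected neighbors come to respond to similar images.
        closeness = np.exp(-((i - winner) ** 2) / (2 * width ** 2))
        weights[i] += rate * closeness * (x - weights[i])

for x in rng.random((200, n_inputs)):
    neighborhood_step(x, weights)
```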

This method of learning requires no `teacher' and is typically much faster than the supervised methods
mentioned earlier. It also bears some resemblance to the learning mechanisms exhibited by certain types of
neuron. It can also be more powerful in its classification capabilities - to use our old example, it may
be capable of spotting a triangle whatever its size, orientation and position in the input pattern plane.

Such models are still in their infancy; however, the fields of science on which they are based -
neuroscience and complex systems - are advancing very rapidly, and it is almost certain that great strides
will be made in the near future in our understanding of artificial neural networks. Whether that increased
understanding will be sufficient to build an intelligent machine - a computer with a mind - is an
impossible question to answer at this time. Very many hurdles will have to be overcome, and it is possible
that we shall only ever achieve very crude results. But it's a pretty good bet that the next decades will
prove to be an exciting time for the subject of AI and the possibility of a machine mind.
