
What is Artificial Intelligence?

Intelligence is the ability to think, imagine, create, memorize, understand, recognize patterns, make choices, adapt to change, and learn from experience. Artificial intelligence is a human endeavor to create a non-organic, machine-based entity that has all the above abilities of natural organic intelligence. Hence it is called 'Artificial Intelligence' (AI).

It is the ultimate challenge for an intelligence to create an equal, another intelligent being. It is the ultimate form of art, where the artist's creation inherits not only the impressions of his thoughts but also his ability to think!

How will one recognize artificial intelligence? According to Alan Turing, if you question a human and an artificially intelligent being and, from their answers, you cannot tell which is the artificial one, then you have succeeded in creating artificial intelligence. Computer scientists' initial hopes of creating an artificial intelligence were dashed as they realized how much they had underrated the human mind's capabilities!

How do you teach a machine to imagine? Researchers realized that they must first understand what makes natural intelligence, the human mind, possible. Only then could they get anywhere near their goal.

Approaches to AI

Initially, researchers thought that creating an AI would simply be a matter of writing programs for each and every function an intelligence performs! As they went on with this task, they realized that the approach was too shallow. Even seemingly simple functions like face recognition, spatial sense, pattern recognition, and language comprehension were beyond their programming skills!

They understood that to create an AI, they must delve deeper into natural intelligence first. They tried to understand how cognition, comprehension, and decision-making happen in the human mind.
They had to understand what understanding really means! Some went into the study of the brain
and tried to understand how the network of neurons creates the mind.

Thus, researchers branched into different approaches, but they shared the same goal of creating intelligent machines. Let us introduce ourselves to some of the main approaches to artificial intelligence. They are divided into two main lines of thought: the bottom-up and the top-down approach.

Neural Networks: This is the bottom-up approach. It aims at mimicking the structure and functioning of the human brain to create intelligent behavior. Researchers are attempting to build a silicon-based electronic network that is modeled on the working and form of the human brain! Our brain is a network of billions of neurons, each connected to many others.

At an individual level, a neuron has very little intelligence, in the sense that it operates by a simple set of rules, conducting electric signals through its network. However, the combined network of all these neurons creates intelligent behavior that is unrivaled and unsurpassed. So these researchers created networks of electronic analogues of neurons, based on Boolean logic. Memory was recognized to be an electronic signal pattern in a closed neural network.

The human brain works by learning to recognize patterns and remembering them. Similarly, the neural networks developed in this approach have the ability to learn patterns and remember them. The approach has its limitations due to the scale and complexity of developing an exact replica of the human brain, as the neurons number in the billions! Currently, virtual neural networks are created through simulation techniques. This approach has not yet achieved the ultimate goal, but there is very positive progress in the field, and progress in the development of parallel computing will aid it in the future.
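
To make the bottom-up idea concrete, here is a minimal sketch in Python with NumPy. The network size, learning rate, and the XOR task are illustrative assumptions rather than anything described above; the point is simply that many simple "neurons" adjusting their connections can learn a pattern.

import numpy as np

# Illustrative sketch only: a tiny two-layer network that learns the XOR pattern.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR targets

W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden connections
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = sigmoid(X @ W1 + b1)       # each hidden "neuron" follows a simple rule
    out = sigmoid(h @ W2 + b2)     # network output
    # Backpropagation: nudge the connection weights to reduce the error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]] after training
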
Expert Systems: This is the top-down approach. Instead of starting at the base level of neurons, followers of the expert systems approach take advantage of the phenomenal computational power of modern computers to design intelligent machines that solve problems by deductive logic. It is like the dialectic approach in philosophy.

This is an intensive approach, as opposed to the extensive approach of neural networks. As the name suggests, expert systems are machines devoted to solving problems in very specific niche areas. They have total expertise in a specific domain of human thought. Their tools are like those of a detective or sleuth. They are programmed to use statistical analysis and data mining to solve problems. They arrive at a decision through a logical flow developed by answering yes-no questions.
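
As a rough illustration of this yes-no, deductive flow, the following toy sketch in Python walks through a small rule base and returns a conclusion. The diagnostic domain and the rules themselves are entirely hypothetical, not taken from any real expert system.

# Toy rule-based "expert system": a chain of yes/no questions leading to a
# conclusion. The equipment-diagnosis rules are purely hypothetical.
RULES = [
    ("Is the device receiving power?", {False: "Check the power supply."}),
    ("Does it pass the self-test?",    {False: "Run hardware diagnostics."}),
    ("Is the software up to date?",    {False: "Install the latest update."}),
]

def diagnose(answers):
    """answers maps each question to True/False; returns a conclusion."""
    for question, outcomes in RULES:
        answer = answers[question]
        if answer in outcomes:          # a "no" triggers a conclusion
            return outcomes[answer]
    return "No fault found by this rule base."

print(diagnose({
    "Is the device receiving power?": True,
    "Does it pass the self-test?": False,
    "Is the software up to date?": True,
}))   # -> "Run hardware diagnostics."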

Chess computers such as IBM's Deep Blue, which defeated grandmaster Garry Kasparov in 1997, are examples of expert systems. Chess is known as the Drosophila, or experimental specimen, of artificial intelligence.

Applications of AI

Artificial intelligence, in the form of expert systems and neural networks, has applications in every field of human endeavor. These systems combine precision and computational power with pure logic to solve problems and reduce error in operation. Already, robotic expert systems are taking over many jobs in industry that are dangerous for humans or beyond human ability. Some of the applications, divided by domain, are as follows:

Heavy Industries and Space: Robotics and cybernetics have taken a leap forward when combined with artificially intelligent expert systems. Entire manufacturing processes are now totally automated, controlled, and maintained by computer systems in car manufacture, machine tool production, computer chip production, and almost every high-tech process. Robots carry out dangerous tasks such as handling hazardous radioactive materials. Robotic pilots carry out complex maneuvers for unmanned spacecraft sent into space. Japan is the world's leading country in robotics research and use.

Finance: Banks use intelligent software applications to screen and analyze financial data. Software that can predict trends in the stock market has been created and has been known to beat humans in predictive power.

Computer Science: Researchers in the quest for artificial intelligence have created spin-offs like dynamic programming, object-oriented programming, symbolic programming, intelligent storage management systems, and many more such tools. The primary goal of creating an artificial intelligence still remains a distant dream, but people are getting an idea of the ultimate path that could lead to it.

Aviation: Airlines use expert systems in planes to monitor atmospheric conditions and system status. The plane can be put on autopilot once a course is set for the destination.

Weather Forecast: Neural networks are used for predicting weather conditions. Historical data is fed to a neural network, which learns the pattern and uses that knowledge to predict future weather patterns.
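
A minimal sketch of this idea, assuming scikit-learn is available and using synthetic temperature readings purely for illustration, might look like this:

# Sketch: feed previous data to a neural network so it can predict the next value.
# The data, window size, and network size are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor

temps = 20 + 5 * np.sin(np.linspace(0, 20, 400))         # fake daily temperatures
window = 7                                               # use the past 7 days
X = np.array([temps[i:i + window] for i in range(len(temps) - window)])
y = temps[window:]                                       # value on the following day

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)
print(model.predict(temps[-window:].reshape(1, -1)))     # forecast for "tomorrow"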

Swarm Intelligence: This is both an approach to and an application of artificial intelligence, similar to neural networks. Here, programmers study how intelligence emerges in natural systems like swarms of bees, even though at the individual level a bee just follows simple rules. They study relationships in nature, like prey-predator relationships, that give an insight into how intelligence emerges in a swarm or collective from simple rules at the individual level. They develop intelligent systems by creating agent programs that mimic the behavior of these natural systems!
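
The sketch below gives a flavor of such agent programs: fifty simulated agents, each following one simple local rule, end up clustering together. All parameters are illustrative assumptions (Python with NumPy).

# Illustrative sketch: each agent drifts toward the average position of its
# nearby neighbours; "swarming" emerges even though no agent encodes it.
import numpy as np

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(50, 2))    # 50 agents on a 100x100 field

for step in range(200):
    for i in range(len(positions)):
        dists = np.linalg.norm(positions - positions[i], axis=1)
        neighbours = positions[(dists < 20) & (dists > 0)]
        if len(neighbours):
            # simple local rule: move 5% of the way toward nearby neighbours
            positions[i] += 0.05 * (neighbours.mean(axis=0) - positions[i])

print("spread after simulation:", positions.std(axis=0))  # agents have clustered
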
Is artificial intelligence really possible? Can an intelligence like the human mind surpass itself and create its own image? The depths and powers of the human mind are only just being tapped. Who knows, it might be possible; only time can tell! Even if such an intelligence is created, will it share our sense of morals and justice, will it share our idiosyncrasies? This will be the next step in the evolution of intelligence. I hope I have succeeded in conveying to you the excitement and possibilities this subject holds!

Role of Computers

Almost everyone is aware that Information Technology (IT) has played a very significant role in taking businesses to new heights. Before the advent of computers and related technology, business was done entirely using manual resources. As a result, tasks took longer to complete, the quality of work was often not up to the mark, and procedures tended to be more complicated. However, as computers came into use in business establishments, the processing of work became more streamlined and reliable. Read on to know more about the use of computers in business.

What is Corporate Computing?

Corporate computing is a concept centered on the use of information technology in business concerns. If you are a working professional, you will have seen how extensively computer technologies are used in businesses. These technologies are used in almost all sectors, such as accounts and payroll management, inventory management and control, shipping functions, data and database management, financial analysis, software development, security control, and many other essential fields. The end result of corporate computing is increased productivity and quality. Let us now focus on the use of computers in the business world.

Use of Computers in Business World

The following are just a few of the major fields in business where computing is used extensively.

Inventory Control and Management


Inventory control and management is a crucial process, especially in establishments related to retail and production. Computers are used for recording all aspects of incoming goods, details of goods and services, distribution of stock, and storage details. Note that in small retail and production firms, simple computer software is generally used, whereas large corporations employ Enterprise Resource Planning (ERP) systems.
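
As a very small sketch of the record-keeping involved, an inventory system essentially tracks goods received and shipped per item. The item names and quantities below are invented for illustration (Python).

# Minimal inventory record-keeping sketch; data is made up for illustration.
inventory = {}

def receive(item, qty):
    """Record goods coming in."""
    inventory[item] = inventory.get(item, 0) + qty

def ship(item, qty):
    """Record distribution of stock, refusing to ship more than is held."""
    if inventory.get(item, 0) < qty:
        raise ValueError(f"insufficient stock of {item}")
    inventory[item] -= qty

receive("widget", 100)
ship("widget", 30)
print(inventory)   # {'widget': 70}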

Accounts and Payroll Management


Accounting and payroll management is also an important part of the overall system in a company. In any kind of industry, computers are widely used for managing the accounts of administration, sales, purchases, and invoices, as well as for payroll management, which includes recording the financial details of employees. These are just some components of the accounts and payroll management system where computing is used.

Database Management
Database management is associated with the filing, recording, managing, and retrieval of data whenever required. For the smooth running of a business, it is very important that all procedures and details are stored. This storage is done with the help of large databases and servers, which have to be maintained on a regular basis. These databases and servers are managed through computers by the appropriate authorities in a company.
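
A minimal sketch of such storage and retrieval, using Python's built-in sqlite3 module with an invented table and records, could look like this:

# Illustrative sketch: store business records and retrieve them on demand.
import sqlite3

conn = sqlite3.connect(":memory:")             # an in-memory database for the example
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, city TEXT)")
conn.execute("INSERT INTO customers (name, city) VALUES (?, ?)", ("Acme Ltd", "Pune"))
conn.commit()

# Retrieval whenever required: query the stored records.
for row in conn.execute("SELECT id, name, city FROM customers WHERE city = ?", ("Pune",)):
    print(row)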

Software Development
It can be said that for every computing need, software has to be used. Software is built to help businesses integrate their processes and carry out their work properly. Nowadays, ERP systems are widely used in business to blend all processes together and produce the expected output. There are many other software and application packages that a business may need to use, depending on the nature of its work.

This is some general information on the use of computers in business. There are many other fields, such as security control, communication, research, budgeting and forecasting, and web management, where computers are essential. The impact of information technology on business has certainly changed the way businesses operate and has coordinated the different practices of a firm so that they function collectively. Computer use is not limited to businesses; computers are also used in sectors such as medicine and defense.

What is office automation?


The term office automation refers to all tools and methods that are applied to office activities which make it possible to process written, visual, and sound data in a computer-aided manner.

Office automation is intended to provide elements which make it possible to simplify, improve, and automate the organisation of the activities of a company or a group of people (management of administrative data, synchronisation of meetings, etc.).

Considering that organizations require ever more communication, office automation today is no longer limited to simply capturing handwritten notes. In particular, it also includes the following activities:

• exchange of information
• management of administrative documents
• handling of numerical data
• meeting planning and management of work schedules

Office suite tools


The term "office suite" refers to all software programs which make it possible to meet
office needs. In particular, an office suite therefore includes the following software
programs:

• a word processor
• a spreadsheet
• a presentation tool
• a database
• a scheduler

The main office suites are:


• AppleWorks
• Corel WordPerfect
• IBM/Lotus SmartSuite
• Microsoft Office
• Sun StarOffice
• OpenOffice (free and open source)

Electronic Data Processing (EDP)


EDP refers to the use of automated methods to process commercial data. Typically, this
uses relatively simple, repetitive activities to process large volumes of similar
information. For example: stock updates applied to an inventory, banking transactions
applied to account and customer master files, booking and ticketing transactions to an
airline's reservation system, billing for utility services.

Today
As with other industrial processes, commercial IT has moved in all respects from a bespoke, craft-based industry, where the product was tailored to fit the customer, to multi-use components taken off the shelf to find the best fit in any situation. Mass production has greatly reduced costs, and IT is available to the smallest company.

LEO (the Lyons Electronic Office) was hardware tailored for a single client. Today, Intel Pentium and compatible chips are standard and become parts of other components which are combined as needed. One notable change was the freeing of computers and removable storage from protected, air-filtered environments. Microsoft and IBM have at various times been influential enough to impose order on IT, and the resultant standardization allowed specialist software to flourish.

Software is available off the shelf: apart from Microsoft products such as Office, or
Lotus, there are also specialist packages for payroll and personnel management, account
maintenance and customer management, to name a few. These are highly specialized and
intricate components of larger environments, but they rely upon common conventions
and interfaces.

Data storage has also been standardized. Relational databases are developed by different suppliers to common formats and conventions. Common file formats can be shared by large mainframes and desktop personal computers, allowing online, real-time input and validation.

In parallel, software development has fragmented. There are still specialist technicians,
but these increasingly use standardized methodologies where outcomes are predictable
and accessible. At the other end of the scale, any office manager can dabble in
spreadsheets or databases and obtain acceptable results (but there are risks).
What is Data Communications?

The distance over which data moves within a computer may vary from a few thousandths
of an inch, as is the case within a single IC chip, to as much as several feet along the
backplane of the main circuit board. Over such small distances, digital data may be
transmitted as direct, two-level electrical signals over simple copper conductors. Except
for the fastest computers, circuit designers are not very concerned about the shape of the
conductor or the analog characteristics of signal transmission.

Frequently, however, data must be sent beyond the local circuitry that constitutes a
computer. In many cases, the distances involved may be enormous. Unfortunately, as the
distance between the source of a message and its destination increases, accurate
transmission becomes increasingly difficult. This results from the electrical distortion of
signals traveling through long conductors, and from noise added to the signal as it
propagates through a transmission medium. Although some precautions must be taken for
data exchange within a computer, the biggest problems occur when data is transferred to
devices outside the computer's circuitry. In this case, distortion and noise can become so
severe that information is lost.

Data Communications concerns the transmission of digital messages to devices external to the message source. "External" devices are generally thought of as being independently
powered circuitry that exists beyond the chassis of a computer or other digital message
source. As a rule, the maximum permissible transmission rate of a message is directly
proportional to signal power, and inversely proportional to channel noise. It is the aim of
any communications system to provide the highest possible transmission rate at the
lowest possible power and with the least possible noise.

Communications Channels

A communications channel is a pathway over which information can be conveyed. It may be defined by a physical wire that connects communicating devices, or by a radio, laser,
or other radiated energy source that has no obvious physical presence. Information sent
through a communications channel has a source from which the information originates,
and a destination to which the information is delivered. Although information originates
from a single source, there may be more than one destination, depending upon how many
receive stations are linked to the channel and how much energy the transmitted signal
possesses.

In a digital communications channel, the information is represented by individual data bits, which may be encapsulated into multibit message units. A byte, which consists of
eight bits, is an example of a message unit that may be conveyed through a digital
communications channel. A collection of bytes may itself be grouped into a frame or
other higher-level message unit. Such multiple levels of encapsulation facilitate the
handling of messages in a complex data communications network.

Any communications channel has a direction associated with it:

The message source is the transmitter, and the destination is the receiver. A channel
whose direction of transmission is unchanging is referred to as a simplex channel. For
example, a radio station is a simplex channel because it always transmits the signal to its
listeners and never allows them to transmit back.

A half-duplex channel is a single physical channel in which the direction may be reversed. Messages may flow in two directions, but never at the same time, in a half-duplex system. In a telephone call, one party speaks while the other listens. After a pause, the other party speaks and the first party listens. Speaking simultaneously results in garbled sound that cannot be understood.

A full-duplex channel allows simultaneous message exchange in both directions. It really consists of two simplex channels, a forward channel and a reverse channel, linking the same points. The transmission rate of the reverse channel may be slower if it is used only for flow control of the forward channel.

Serial Communications

Most digital messages are vastly longer than just a few bits. Because it is neither practical
nor economic to transfer all bits of a long message simultaneously, the message is broken
into smaller parts and transmitted sequentially. Bit-serial transmission conveys a message
one bit at a time through a channel. Each bit represents a part of the message. The
individual bits are then reassembled at the destination to compose the message. In
general, one channel will pass only one bit at a time. Thus, bit-serial transmission is
necessary in data communications if only a single channel is available. Bit-serial
transmission is normally just called serial transmission and is the chosen communications
method in many computer peripherals.

Byte-serial transmission conveys eight bits at a time through eight parallel channels.
Although the raw transfer rate is eight times faster than in bit-serial transmission, eight
channels are needed, and the cost may be as much as eight times higher to transmit the
message. When distances are short, it may nonetheless be both feasible and economic to
use parallel channels in return for high data rates. The popular Centronics printer
interface is a case where byte-serial transmission is used. As another example, it is
common practice to use a 16-bit-wide data bus to transfer data between a microprocessor
and memory chips; this provides the equivalent of 16 parallel channels. On the other
hand, when communicating with a timesharing system over a modem, only a single
channel is available, and bit-serial transmission is required.
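
The contrast can be sketched in a few lines of Python: a short message is broken into individual bits for bit-serial transfer and then reassembled at the receiver. The MSB-first bit order is an illustrative choice, not a requirement of any particular interface.

# Illustrative bit-serial sketch: break a message into bits, send them one at
# a time (here just a list), and reassemble them at the destination.
message = b"Hi"

bits = [(byte >> i) & 1 for byte in message for i in range(7, -1, -1)]  # MSB first

received = bytearray()
for i in range(0, len(bits), 8):
    value = 0
    for bit in bits[i:i + 8]:
        value = (value << 1) | bit     # rebuild each byte from its eight bits
    received.append(value)

print(bits[:8], bytes(received))       # [0, 1, 0, 0, 1, 0, 0, 0] b'Hi'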

The baud rate refers to the signalling rate at which data is sent through a channel and is
measured in electrical transitions per second. In the EIA232 serial interface standard, one
signal transition, at most, occurs per bit, and the baud rate and bit rate are identical. In
this case, a rate of 9600 baud corresponds to a transfer of 9,600 data bits per second with
a bit period of 104 microseconds (1/9600 sec.). If two electrical transitions were required for each bit, as is the case in Manchester coding, then at a rate of 9600 baud only 4800 bits per second could be conveyed. The channel efficiency is the fraction of the transmitted bits that carries useful information; it excludes the framing, formatting, and error-detecting bits that may be added to the information bits before a message is transmitted, and it is therefore always less than one.
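
The arithmetic above can be checked directly. The 8-data-bit-plus-2-overhead-bit framing used for the efficiency figure is an assumed example (roughly a start and stop bit per character), not something specified in the text (Python).

baud = 9600
bit_period_us = 1 / baud * 1e6                 # about 104.2 microseconds per bit
bits_per_sec_two_transitions = baud / 2        # 4800 bit/s if 2 transitions per bit

data_bits, overhead_bits = 8, 2                # assumed: 8 data bits + 2 framing bits
efficiency = data_bits / (data_bits + overhead_bits)   # 0.8, always less than one

print(round(bit_period_us, 1), bits_per_sec_two_transitions, efficiency)
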
The data rate of a channel is often specified by its bit rate (often thought, erroneously, to be the same as the baud rate). However, an equivalent measure of channel capacity is bandwidth. In general, the maximum data rate a channel can support is directly proportional to the channel's bandwidth and inversely proportional to the channel's noise level.
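
This relationship is made precise by the Shannon-Hartley theorem, C = B log2(1 + S/N). The bandwidth and signal-to-noise figures below are illustrative values, roughly those of a voice-grade telephone line (Python).

# Shannon-Hartley capacity with illustrative numbers.
import math

bandwidth_hz = 3100            # channel bandwidth B (assumed)
snr = 1000                     # signal-to-noise power ratio S/N, about 30 dB (assumed)
capacity_bps = bandwidth_hz * math.log2(1 + snr)
print(round(capacity_bps))     # roughly 31,000 bits per second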

A communications protocol is an agreed-upon convention that defines the order and meaning of bits in a serial transmission. It may also specify a procedure for exchanging
messages. A protocol will define how many data bits compose a message unit, the
framing and formatting bits, any error-detecting bits that may be added, and other
information that governs control of the communications hardware. Channel efficiency is
determined by the protocol design rather than by digital hardware considerations. Note
that there is a tradeoff between channel efficiency and reliability - protocols that provide
greater immunity to noise by adding error-detecting and -correcting codes must
necessarily become less efficient.
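
The tradeoff can be sketched as follows. The frame layout (start byte, length, payload, checksum) is a made-up convention used only to show how overhead bits lower channel efficiency while adding error detection (Python).

# Illustrative framing sketch with a made-up frame layout.
def frame(payload: bytes) -> bytes:
    checksum = sum(payload) & 0xFF             # simple 8-bit checksum for error detection
    return bytes([0x7E, len(payload)]) + payload + bytes([checksum])

msg = b"hello"
f = frame(msg)
efficiency = len(msg) / len(f)                 # useful bytes / total bytes sent
print(f.hex(), round(efficiency, 2))           # 5 payload bytes out of 8 -> 0.62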
