
BRAIN LIKE
COMPUTER
PRAXIS BUSINESS SCHOOL

A Report

Submitted to

Prof. Prithwis Mukherjee

In partial fulfillment of the requirements of the course

BUSINESS INFORMATION SYSTEMS

On 7/11/2010

By

Joyeeta Sarkar

Roll number: B10009

Batch: 2010-12

Abstract:

This paper outlines the concept of a 'brain like computer', a machine that evolves by learning the relationship
between sensory input and behavioral output. The idea is developed using hypotheses based on
established features of brain function. Now that the new 'memristor technology' has been commercialized
successfully, it is time to think forward and build a computer that replicates the more complex workings of
a human brain. With electroencephalography technology, it is expected that brain signals can be read
from sensors placed on the scalp, and in the long run a computer could read our thoughts.
The future success of this study will depend on multidisciplinary collaboration and advances in allied
research areas.

Introduction: why is it so important?

 In 1970, scientists experimenting on monkeys determined that it was possible to control
the firing of neurons in the brain.
 In 1999, scientists embedded electrodes into the thalamus of cats.
 In 2005, the approach was tested experimentally on 16 blind patients for controlling
artificial arms.
 In 2008, HP developed 'memristor technology'.
 In 2009, IBM announced a development that could one day lead to a new kind of
computer, one that uses specially designed hardware and software to mimic what is inside
our heads.

Why is it needed?

The brain is like a computer: it receives information, processes it, and then carries out the proper
function. The mind has an amazing ability to integrate ambiguous information across the senses, and it
can effortlessly create the categories of time, space, object, and interrelationship from sensory data. But
there is no computer that can even remotely approach the remarkable feats the mind performs. There is
a need for a new kind of intelligence that can sort through, prioritize and extract the most important
information, much as the brain deals with sights, sounds, tastes, touch and smells. Scientists are
therefore trying to build a system with brain-like intelligence. It would not be invented for a single
function; with its versatility, robustness and plasticity it would work just like a brain, continuously
rearranging its internal state and solving complex problems the way networks of neurons do. Like the
human brain, and unlike any existing computer, it would heal itself if there is a defect; the brain already
does similar work, for if one neuron dies, another neuron takes over its function. The brain can work more
efficiently than a supercomputer, and some emotions that no computer can create are expected to be
within reach of this 'brain like' computer.

We can imagine that after its successful invention we will no longer need to carry a mobile charger
on a journey, and our personal computers will boot automatically at the proper time using this new
technology.

Technology required for this invention:

Before discussing the new invention, I would like to discuss the path that has led to it. Scientists
started thinking about this topic more than 40 years ago. Some technology, such as the 'MEMRISTOR', has
already been invented and commercialized. To move forward we need to know about this technology.
MEMRISTOR TECHNOLOGY

Memristors are basically a fourth class of fundamental electrical circuit element, joining the resistor,
the capacitor and the inductor, and they exhibit their unique properties primarily at the nanoscale. A
memristor is a passive two-terminal circuit element that maintains a relationship between the time
integrals of the current through it and the voltage across it.

HP first demonstrated its memristor technology back in 2008 and has now announced the
commercial development of the technology through a collaboration with the memory manufacturer Hynix
Semiconductor. Memristors could eventually replace memory chips and hard drives. The technology is
claimed to be 100 times as fast as flash storage while using about a tenth of the energy of existing solid-
state memory technologies. That means some gadgets, such as MP3 players, might only need to be
powered up once in their lifetime. Unlike conventional computer memory, which stores data with
electronic on and off switches, Hewlett-Packard's memristor technology works at the atomic level. As
electrons move across a titanium dioxide memristor chip, they nudge atoms ever so slightly, sometimes
by no more than a nanometer. It is, in effect, an atomic switch.
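
To make the principle concrete, below is a minimal simulation sketch of the linear-drift memristor model that the HP researchers published. All parameter values are illustrative assumptions, not actual device figures. The state variable w (the width of the doped region) is driven by the current, and the resistance depends on w, which is why the device "remembers" the charge that has flowed through it.

```python
# A minimal sketch of the HP linear-drift memristor model; the
# parameter values below are illustrative, not HP's device figures.
import math

R_ON, R_OFF = 100.0, 16_000.0   # resistance of doped / undoped regions (ohms)
D = 10e-9                        # device thickness (m)
MU_V = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)
dt = 1e-5                        # simulation time step (s)

w = 0.5 * D                      # width of the doped region (state variable)
for step in range(2000):
    t = step * dt
    v = math.sin(2 * math.pi * 50 * t)            # applied voltage (50 Hz)
    m = R_ON * (w / D) + R_OFF * (1 - w / D)      # memristance depends on state
    i = v / m                                     # current through the device
    w += MU_V * (R_ON / D) * i * dt               # drift of the doped boundary
    w = min(max(w, 0.0), D)                       # boundary stays inside device
# The key property: cut the voltage and w (hence the resistance) is retained,
# which is why memristor memory needs no power to hold data.
```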

IBM'S COMPUTER PERFORMS LIKE A CAT BRAIN

Supercomputers can now simulate brains up to the complexity of those of small mammals. Scientists
last year used the Blue Gene supercomputer to simulate a mouse's brain, comprising 55 million neurons and
some half a trillion synapses. Technology has only recently reached a stage at which structures can be
produced that match the density of neurons and synapses in real brains, around 10 billion in each
square centimeter. IBM is investigating core micro- and macro-circuits of the brain that can be used for a
wide variety of functions.

The adaptability of brains lies in their ability to tune synapses, the connections between
neurons. Synaptic connections form, break, and are strengthened or weakened depending on the signals
that pass through them. Making a nano-scale material that fits that description is one of the major
goals of the project. Scientists are hopeful that this computer could gather together disparate information,
weigh it based on experience, form memories independently and arguably begin to solve problems in a way
that has so far been the preserve of what we call "thinking".
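
As a rough illustration of the synaptic tuning described above, the sketch below (a toy model of my own, not IBM's actual algorithm) strengthens the connection between two simulated neurons whenever they fire together and weakens it otherwise, in the spirit of Hebbian learning.

```python
# Toy Hebbian tuning: synapses strengthen when the neurons they connect
# fire together, and slowly decay (weaken, eventually break) otherwise.
import random

N = 8
weights = [[0.5] * N for _ in range(N)]   # synaptic strengths between N neurons
LEARN, DECAY = 0.05, 0.01

for _ in range(100):
    active = [random.random() < 0.3 for _ in range(N)]   # which neurons fired
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            if active[i] and active[j]:
                weights[i][j] = min(1.0, weights[i][j] + LEARN)  # strengthen
            else:
                weights[i][j] = max(0.0, weights[i][j] - DECAY)  # weaken
```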

IBM ANNOUNCES ADVANCES TOWARD A COMPUTER THAT WORKS LIKE A HUMAN BRAIN

A team of researchers has built a molecular computer using lessons learned from the human
brain. Modern computers are quite fast, capable of executing trillions of instructions a second, but they
cannot match the intelligent performance of our brain. We can see, recognize, talk to and hear someone
walking by in the hallway almost instantaneously, which is a Herculean task for even the fastest computer.
That is because information processing in digital computers is done sequentially: once a current
path is established along a circuit, it does not change. By contrast, the electrical impulses that travel
through our brains follow vast, dynamic, evolving networks of neurons that operate collectively.

The researchers made their different kind of computer with DDQ, a hexagonal molecule made of
nitrogen, oxygen, chlorine and carbon that self-assembles in two layers on a gold substrate. The DDQ
molecule can switch among four conducting states (0, 1, 2 and 3), unlike the binary switches (0 and 1)
used by conventional computers. The researchers have demonstrated an assembly of molecular switches that
simultaneously interact to perform a variety of computational tasks, including conventional digital logic,
calculating Voronoi diagrams, and simulating natural phenomena such as heat diffusion and cancer growth.
They also demonstrated a conceptual shift away from serial processing with static architectures.
Approximately 300 molecules talk with each other at a time during information processing, mimicking how
neurons behave in the brain. This evolving neuron-like circuit network allows many problems to be
addressed on the same grid, which gives the device intelligence. As a result, the tiny processor can solve
problems for which no computer algorithms are known. The molecular processor also heals itself if there is a
defect; this property comes from the self-organizing ability of the molecular layer. No existing man-made
computer has this property, but in humans, if a neuron dies, another neuron takes over its function.
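
The sketch below is a conceptual toy model of those two properties, the four conducting states and the self-healing: a grid of four-state cells in which a failed cell recovers its state from its neighbours. The grid size and the majority-vote repair rule are my own assumptions for illustration, not the researchers' actual mechanism.

```python
# Toy model: a grid of four-state switches (states 0-3) in which a failed
# cell's state is taken over by its neighbours, mimicking self-healing.
import random

SIZE = 6
grid = [[random.randint(0, 3) for _ in range(SIZE)] for _ in range(SIZE)]

def heal(grid, x, y):
    """Replace a failed cell's state with the most common neighbouring state."""
    neighbours = [grid[i][j]
                  for i in (x - 1, x, x + 1) for j in (y - 1, y, y + 1)
                  if 0 <= i < SIZE and 0 <= j < SIZE and (i, j) != (x, y)]
    # Majority vote over neighbours stands in for 'another unit takes over'.
    return max(set(neighbours), key=neighbours.count)

x, y = 2, 3
grid[x][y] = None              # simulate a defect at one molecular switch
grid[x][y] = heal(grid, x, y)  # the network repairs itself locally
```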

Researchers from IBM and the Lawrence Berkeley National Laboratory have developed an
algorithm for mapping the human brain at new levels of detail. Eventually, scientists hope that this detailed
knowledge will help them build a computer that replicates the more complex workings of a human brain.
They actually tried to mimic a cat's brain, but did not achieve complete success; for example, the simulation
did not exactly mimic what a real cat does in catching a mouse. But it surpassed earlier efforts that simulated
the much simpler brain structure of a creature the size of a mouse. Researchers used an IBM supercomputer
at the Lawrence Livermore Lab to model the movement of data through a structure with 1 billion neurons
and 10 trillion synapses, which allowed them to see how information "percolates" through a system
comparable to a feline cerebral cortex.

A key difference between human brains and traditional computers is that current computers are
designed on a model that separates processing from data storage, which can lead to a lag in updating
information. The brain works on a more complex physical structure that can integrate and react to a
constant stream of sights, sounds and other sensory information. The data can be very ambiguous: when
we see a friend's face in a crowd, she could be wearing a red sweater or a blue dress, or her hair could be
styled differently, but we are able to get to the fundamental essence of the pattern and recognize that this is
our friend. One can imagine a cognitive computer that could analyze a flood of constantly updated data
from trading floors, banking institutions and even real estate markets around the world. The problem is that
there is a huge amount of data; there is a need for a new kind of intelligence that can sort through, prioritize
and extract the most important information, much as the brain deals with sights, sounds, tastes, touch and
smells.

EEG TECHNOLOGY

A team of University of Maryland researchers has developed a technology that could allow
people with disabilities or paralysis to operate a robotic arm, motorized wheelchair or other prosthetic
device using a headset with scalp sensors that send signals from the brain to the device. They
reconstructed 3-D hand motions from brain signals recorded in a non-invasive way. In the experiment,
researchers placed an array of 34 sensors on the scalps of five participants to record their brains'
electrical activity, using a process called electroencephalography, or EEG. Volunteers were asked to
reach from a center button and touch eight other buttons in random order 10 times, while the scientists
recorded their brain signals and hand motions. Afterward, the researchers attempted to decode the
signals and reconstruct the 3-D hand movements.
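
The decoding principle here is essentially linear regression from lagged EEG samples to hand kinematics. The sketch below illustrates it with random placeholder data: the sensor count matches the study, but the lag window and the plain least-squares fit are simplifying assumptions rather than the study's exact method.

```python
# Sketch of linear EEG decoding: hand velocity is reconstructed as a
# weighted sum of current and past EEG samples. Data are placeholders.
import numpy as np

n_samples, n_sensors, lags = 1000, 34, 10
rng = np.random.default_rng(0)
eeg = rng.standard_normal((n_samples, n_sensors))   # scalp EEG recordings
hand = rng.standard_normal((n_samples, 3))          # true 3-D hand velocity

# Design matrix: the current and the previous `lags - 1` EEG samples.
X = np.hstack([eeg[lags - k : n_samples - k] for k in range(lags)])
Y = hand[lags:]

W, *_ = np.linalg.lstsq(X, Y, rcond=None)           # least-squares weights
reconstructed = X @ W                               # decoded hand motion
```
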
Previously, researchers reconstructing hand motions had used non-portable, invasive methods that
place sensors inside the brain. The Maryland team found that one sensor in particular provided the most
accurate information. That sensor was located over a part of the brain called the primary sensorimotor
cortex, a region associated with voluntary movement. Useful signals were also recorded from another
region called the inferior parietal lobule, which is known to help guide limb movement. From this
experiment the researchers concluded that electrical brain activity acquired from the scalp surface carries
enough information to reconstruct continuous, unconstrained hand movements. With this EEG technology it
may eventually be possible for people with severe neuromuscular disorders, such as amyotrophic lateral
sclerosis (ALS), stroke, or spinal cord injury, to regain control of complex tasks without needing to have
electrodes implanted in their brains.

Economic potential of ‘brain like computer’:

The world's first commercially available brain-computer interface arrived at CeBIT in 2010. At
€9,000 (roughly $12,000) per unit, the Intendix has an easily usable interface that can be learned in under
10 minutes of training. To use the Intendix, the patient wears a cap with EEG sensors and then, by
concentrating on a grid of letters that flashes on the screen, types the word he or she wants. Once used
to the system, patients can type about one letter per second, the top speed the interface can manage.
Besides typing, it can also trigger alarms, convert text to speech, print, copy, or email. This is a step
toward making such devices more common all over the world. Intendix is not yet widely available
commercially; the company has just entered a marketing phase in which its advertisements do not actually
explain what the product does, and researchers say it will take time to capture the market. Where other
EEG systems can control computers, tag images, or even command robots, Intendix simply types. Its
maker, GUGER TECHNOLOGIES, has also been working on a Second Life control scheme using EEG.
EEG allows Intendix to quickly pick out which letter is being focused on in the grid by flashing different
rows and columns of letters and measuring the brain's response.

EEGs are limited in their applications. They have great temporal resolution, but spatially they lack
the precision needed to really translate human thoughts into computer actions in a way that exceeds our
current keyboard-and-mouse system. For that, scientists need to read neocortical columns or even
individual neurons. Researchers have had some success in that arena already (both with speech and with
motor control). They are now pursuing better sensing technology that allows precise spatial and temporal
resolution without sticking wires into a human head, and new projects are aiming to develop it soon. If that
is achieved, controlling the digital world will become much more intuitive and we could simply think
commands to our devices. We will also be able to talk with each other through our thoughts to some
degree; the next generation's Twitter could be broadcasting what we are thinking. Admittedly, that level of
'brain like computer' technology is fairly far off on the horizon. For now, with Intendix on the market, the
ability to go to a store, buy a device, and start typing with one's thoughts is enough to keep people happy
for a while.
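
The row/column flashing principle can be illustrated with a short sketch. This is not Intendix's actual software: the grid layout, flash counts and simulated response scores are assumptions standing in for real EEG epochs, but the selection logic (pick the letter at the intersection of the row and column that evoked the strongest averaged response) is the core idea.

```python
# Sketch of a P300-style speller: flash each row and column of a letter
# grid, score the brain response to each flash, and choose the letter at
# the intersection of the best-scoring row and column.
import random

GRID = ["ABCDEF", "GHIJKL", "MNOPQR", "STUVWX", "YZ1234", "567890"]
target_row, target_col = 2, 3                 # the user is focusing on 'P'

def flash_score(is_target):
    # Attended flashes evoke a larger (P300-like) response, plus noise.
    return (1.0 if is_target else 0.0) + random.gauss(0, 0.3)

# Average scores over repeated flashes, as real spellers do for reliability.
row_scores = [sum(flash_score(r == target_row) for _ in range(10)) for r in range(6)]
col_scores = [sum(flash_score(c == target_col) for _ in range(10)) for c in range(6)]

best_row = row_scores.index(max(row_scores))
best_col = col_scores.index(max(col_scores))
print(GRID[best_row][best_col])               # almost always prints 'P'
```
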
Finally, what will be the impact on society of animal-like machines? First, family robots may be
permanently connected to wireless family intranets, sharing information with those who want to know where
a loved one is. A person may never need to worry whether his or her loved ones are all right when he or she
is late or far away, because they will be permanently connected to whomever they want. Crime may become
difficult if all family homes are full of half-aware, loyal family machines. In the future, we may never be
entirely alone, and if the controls are in the hands of our loved ones rather than the state, that may not be
such a bad thing.

Slightly further ahead, if some of the intelligence of the horse can be put back into the automobile,
thousands of lives could be saved, as cars become nervous of their drunk owners and refuse to get into
positions where they would crash at high speed. We may look back in amazement at the carnage tolerated
in this age, when every western country had road deaths equivalent to a long, slow-burning war. In the
future, drunks will be able to use cars, which will take them home like loyal horses. And not just drunks:
children, the old and infirm, and the blind will all be empowered. Eventually, if cars were all wirelessly
networked and humans stopped driving altogether, we might scrap the vast amount of clutter all over our
road system (signposts, markings, traffic lights, roundabouts, central reservations) and return our roads to a
soft, sparse, eighteenth-century look. All the information, from negotiation with other cars to traffic and route
updates, would come over the network invisibly, and our towns and countryside would look so much
sparser and more peaceful.

Current Players Working On “Brain like Computer”:

IBM is working on a project to mimic the human brain. The company has teamed up with five
universities to simulate and emulate the brain's abilities for sensation, perception, action, interaction and
cognition. The $4.9 million project, funded by the Defense Advanced Research Projects Agency
(DARPA), uses nanoscale devices for synapses and neurons so that the computer draws no more
energy than the human brain. An IBM Fellow and vice president of IBM's Almaden Research Center in San
Jose said, "We believe that our cognitive computing initiative will help shape the future of computing in a
significant way, bringing to bear new technologies that we haven't even begun to imagine." The initiative
underscores IBM's capabilities in bold, exploratory research and its interest in powerful collaborations to
understand the way the world works. The IBM team recently managed to demonstrate a near-real-time
simulation of a small mammal's brain, using cognitive computing algorithms and the power of IBM's Blue
Gene supercomputer. It is hoped that this experiment will pave the way for mathematical hypotheses of
brain function and structure as the team works toward discovering the brain's core computational micro-
and macro-circuits. It is hoped that the results of the project will enable large-scale roll-outs of intelligent
computers that could deal with problems in much the same way as a human would, and hopefully not lead
to a scenario in which an Arnold Schwarzenegger-like robot comes back from the future to assassinate the
mother of the human resistance against its cybernetic masters.

Key Player (IBM):

IBM, which has been granted $4.9m (£3.27m) by the US defense agency DARPA to research
this project, is the second-largest technology company by market capitalization.

• International Business Machines (IBM) was founded in 1896 as the Tabulating Machine Company
by Herman Hollerith, in New York.

• IBM is an American multinational computer, technology and IT consulting corporation
headquartered in Armonk, New York, United States.

• It is the world's fourth largest technology company and the second most valuable global brand
(after Coca-Cola).

• IBM is one of the few information technology companies with a continuous history dating back to
the 19th century.

• It manufactures and sells computer hardware and software and offers infrastructure services,
hosting services, and consulting services in areas ranging from mainframe computers to nanotechnology.

• With almost 400,000 employees worldwide, IBM is the second most profitable information
technology and services employer in the world (according to the Forbes list), with sales of more than
US$100 billion.

• IBM holds more patents than any other U.S.-based technology company and has eight research
laboratories worldwide.

• The company has scientists, engineers, consultants, and sales professionals in over 200
countries. IBM employees have earned five Nobel Prizes, four Turing Awards, nine National Medals of
Technology, and five National Medals of Science. As a chip maker, it has been among the Worldwide Top
20 Semiconductor Sales Leaders in past years.

But I feel another really interesting thing to look at is how large these companies' workforces are.
Just as with revenues and profits, the numbers can be quite surprising (and impressive).

The same group of 15 well-known tech companies has been considered: Adobe, Amazon, Apple, Baidu,
Cisco, Dell, eBay, Google, HP, IBM, Intel, Microsoft, Oracle, Sun and Yahoo.

From this it is observed that IBM has almost 400,000 employees. To put the size of the IBM workforce
in perspective:

• IBM has more employees than Microsoft, Intel, Dell and Cisco combined.

• IBM has almost 20 times as many employees as Google (or Amazon).

Biggest Challenges:

Invasive and non-invasive brain-machine interface research is a fast-growing field, but a series
of important challenges will have to be met to bring it to fruition.

 The first challenge is in the realm of socio-economics. It is essential to have a worldwide network
of collaboration and information exchange between all disciplines, including the allocation of much
larger resources for the task. Mankind needs to learn how to combine the natural tendency of
individuals toward personal achievement with the fact that we humans are social animals that do
best through synergistic social interactions and association into larger teams, tribes and nations.
The challenge is to create a worldwide feeling of a united mission.
 The second challenge is to understand the complexity of the brain. Scientists have observed that
they cannot advance their understanding of the brain without being able to read its intentions
from its electrical activity; some even think that the brain is too complex for the brain itself to
understand. Scientists now have a great deal of data about the brain, but no single person knows it
all. They have different ideas and theories, but no true, testable global theory of the brain. Without
such a theory, they can only keep "measuring things," like recordings of the EEG. In effect, they are
trying to understand a supercomputer with billions of interacting integrated circuits by
recording currents from a very small sample of those circuits, without even knowing the accurate
connectivity, while the measuring devices short-circuit other parts.

To overcome these challenges:

First, a good theory is required, one that can clearly explain how the brain works. If one wishes to tell,
for example, whether the brain intends to move an arm, at the very least one must be able to predict the
brain activity expected for each movement. For more general predictions, a deeper and more global
theory is needed.

Second, data acquisition and interpretation must improve: to listen to the brain better, we need good
"ears" and a better system that knows how to listen. The first steps toward theories of the principles of
brain function most relevant to neuroprosthetics come from computational sensorimotor control (Kawato,
1999; Shadmehr and Krakauer, 2008; Todorov, 2004; Wolpert and Ghahramani, 2000). In a recent review,
Lalazar and Vaadia presented the wider view that brain function is not based on a serial machine that
reads sensory inputs and responds to them; rather, the brain is a memory-based prediction machine, in
which experiences of the relations between actions and their results build internal models in the brain. In
the case of sensorimotor associations, these models predict the expected sensory inputs and the results
of the brain's own actions, and so bring about perception. In the words of Noë (giving the example of
visual perception), "The experience of seeing occurs when the organism masters what we call the
governing laws of sensorimotor contingency." This is a debatable approach, but one that can still be
adopted when scientists try to construct a machine that interprets brain activity. Naturally, the challenge is
to test such a theory and to pursue others.
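
A toy version of such a memory-based prediction machine is sketched below: an internal model learns the mapping from its own motor commands to their sensory results, driven purely by the prediction error. The one-parameter "dynamics" are invented for illustration.

```python
# Toy forward model: the system learns to predict the sensory result of
# its own motor command; the prediction error drives the learning.
import random

true_gain = 2.5   # the (unknown) mapping from motor command to sensation
est_gain = 0.0    # the internal model's current estimate
LR = 0.1          # learning rate

for trial in range(200):
    command = random.uniform(-1, 1)                       # motor command issued
    predicted = est_gain * command                        # internal prediction
    actual = true_gain * command + random.gauss(0, 0.05)  # sensory feedback
    est_gain += LR * (actual - predicted) * command       # learn from the error

# After training, est_gain approaches true_gain: the model has 'mastered
# the governing laws' relating its actions to their sensory consequences.
```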

 The third challenge is our relatively poor ability to extract the relevant information from monitored
brain activity. At present researchers use various methods to monitor brain activity at different
levels, from highly invasive to non-invasive ones. In all cases the activity provides only partial
and "noisy" information about the subject's intentions. Moreover, the activity changes
continuously, either due to technical problems such as unstable recordings or due to the inherent
adaptive nature of the brain itself, which modifies its activity with the subject's experience.
Furthermore, the coding scheme the brain actually uses to encode information is still
highly debated. Approaches to addressing this challenge have been demonstrated in several
publications of recent years. While facing this challenge, one has to keep in mind the dynamics of
behavior and the predictive nature of the sensorimotor internal models, so that scientists can learn
the relevant dynamics of neural processing. It is therefore essential to improve our understanding
of the adaptive nature of the brain. Interestingly, this may be an easier task than scientists might
think, since the brain is quite good at adapting: cortical maps are highly dynamic, even in
adulthood, and the firing rates of single cells, as well as neuronal interactions, change quite
rapidly during sensorimotor learning.
To facilitate the development of brain-driven artificial devices that produce natural-like
movements, this line of studies should be continued, with special emphasis on developing optimal
learning schemes, adapted to the constraints of human motor learning and performance under
variable conditions, and on using classical and instrumental conditioning to teach the brain how to
interact with the machine. One line of research in the Vaadia lab uses the theory of sensorimotor
predictive loops and shows that the 'brain like computer' is dramatically improved by adopting this
principle: the algorithm not only monitors the brain activity but also adapts continuously in the
background to its changes, while controlling behavior at the same time. Using this principle,
monkeys and machines learned to "work together" in tens of seconds, even when the model was
started from scratch on every recording day. These results suggest that even totally paralyzed
patients will be able to train themselves (in 1-2 minutes), even if their brain activity changes from
minute to minute and day to day.
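
The closed-loop adaptation idea can be sketched as follows. This is my simplification, not the Vaadia lab's actual algorithm: a decoder that starts from scratch updates its weights online, in the background, from the mismatch between the intended and the decoded movement, so that brain and machine converge on a working partnership.

```python
# Sketch of a decoder that adapts while it controls: weights start at
# zero and are updated online from the intent/output mismatch.
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 20
true_w = rng.standard_normal(n_neurons)   # how neurons actually encode intent
w = np.zeros(n_neurons)                   # decoder starts from scratch
LR = 0.05

for t in range(500):
    rates = rng.standard_normal(n_neurons)   # observed firing rates
    intent = true_w @ rates                  # the movement the brain wants
    decoded = w @ rates                      # the movement the machine makes
    w += LR * (intent - decoded) * rates     # adapt in the background
# Within a few hundred updates the decoded output tracks the intent,
# echoing how brain and machine 'learn to work together' so quickly.
```
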
The idea of adaptation also underlies the basic concept of biofeedback, which is the basis for
the use of neurofeedback in animals and humans. Neurofeedback has already proven successful
in human subjects when used to train people to change a particular brain activity through feedback
and reward (instrumental learning). For both types of strategy, some proof-of-principle
demonstrations of clinical effectiveness exist, but larger controlled trials are lacking.
Neurofeedback of slow cortical potentials and the sensorimotor EEG rhythm (SMR) has produced
improvements in attention and school performance in children with attention deficit hyperactivity
disorder (ADHD), compared with control conditions such as placebo training and stimulant
medication. In drug-resistant focal epilepsy, after training of slow cortical potential control, not only
were substantial reductions in seizures reported, but large gains in IQ and cognitive functioning
were also demonstrated, indicating that neurofeedback is a promising tool for improving cognitive
functioning in some brain disorders.

The situation is similar in clinical brain-computer interface research. Animal experiments using
implanted microelectrodes in non-human primates have demonstrated precise brain control of artificial
hands or paralyzed limbs from ensembles of firing neurons in the motor cortices after training, but there is
only one study, with eight chronic stroke patients without residual movement capacity, using the
non-invasive, magnetoencephalographically controlled prosthetic-hand 'brain like computer' technology
that is currently available. Most of those patients were able to open and close their paralyzed hands, fixed
on the orthosis, using sensorimotor oscillations from their motor cortices. EEG or MEG has to be combined
with intelligent peripherals and robots; in motor control, only about four dimensions of control are possible
(i.e., right-left, front-back). Even with sophisticated algorithms, EEG cannot provide better classification
solutions, owing to its biophysical limitations. Verbal communication with completely paralyzed, locked-in
patients, mostly suffering from amyotrophic lateral sclerosis (ALS), using non-invasive technology that
exploits different brain signals from the EEG to select letters or "yes" and "no" answers in a computer
menu, has been described in several reports. However, in completely locked-in patients without any
remaining eye-movement control, this 'brain like computer' was not successful.

In the future, for direct brain communication, this technology should rely on strategies requiring no or
minimal cognitive-attention effort and use mainly implicit learning. Locked-in, vegetative state (VS) and
advanced Alzheimer patients should be trained to produce reflexive or automatic brain responses to
questions or cues, which can then be used as affirmative answers or rejections. Epidural implantation of
electrodes will improve the signal-to-noise ratio and help the patient regulate his or her electrocorticogram.

Classical conditioning of brain potentials and oscillations, using clearly differentiable conditioned and
unconditioned auditory or somatosensory stimuli (vision is often compromised), may overcome the
problem of voluntary, effortful conscious processing that is not possible in these patient groups. In chronic
stroke, spinal cord injury and other forms of motor paralysis, a recent demonstration in reversibly
paralyzed monkeys should be translated into human application: the monkey was trained, by operant
conditioning, to produce spike sequences from a few cells in the motor cortex to activate functional
electrical stimulation (FES) electrodes fixed to the paralyzed fingers. Invasive BMIs using implanted
micro- or macro-electrodes in human patients need to be tested experimentally, as has been done with
non-invasive EEG/MEG, near-infrared spectroscopy and magnetic resonance imaging BCIs. Most
paralyzed patients refuse neurosurgical procedures as too risky; even if less flexible and more error-
prone, non-invasive measures will therefore complement invasive BCIs. In neurofeedback the situation is
less complicated, because some first controlled demonstrations are already available and only large
controlled trials are missing.

Yet there is a lot to improve in our ability to read brain activity. The non-invasive approaches thus far
suffer from several problems: some have low spatial resolution (EEG) and some low temporal resolution
(fMRI). In addition, many of the devices are not practical for daily use.

Likewise, invasive technologies are not yet very useful for clinical applications in their current state. At
present one can implant micro-arrays of many electrodes, but most still damage the tissue to some extent
and do not last for many years. One example of an effort in the right direction is the Neurotrophic
electrode developed by Kennedy and colleagues (2008). Telemetry techniques are still limited and do
not allow transmission of full-waveform signals, or even just the action potentials, from hundreds of
electrodes at sufficient speed. Yet a step in the right direction was recently made with the development of
a 96-channel implantable data acquisition system that performs spike detection and extraction and
wirelessly transmits the data to an external unit (2009). One might argue that we are almost there; yet
most scientists believe that better technologies are needed to reach the desired devices, ones that
provide samples of large numbers of single neurons using telemetry and stable recordings, for many
years and with no damage to the brain tissue. One important light at the end of the tunnel may be
provided in the future by the subfield of nanotechnology, which may develop nano-detectors that can be
implanted inertly in the brain and measure local electrical activity. When that day comes, we will be able
to implant thousands of inert detectors that transmit a compressed version of the information to the
outside of the brain, and that will represent a significant revolution in the field of the 'brain like computer'.
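
The on-implant "spike detection and extraction" mentioned above can be illustrated with a simple threshold detector. The synthetic signal and the specific threshold rule below are my own assumptions, but thresholding at a multiple of a robust noise estimate is a standard approach, and it shows why an implant can transmit short spike snippets instead of the full waveform.

```python
# Sketch of on-implant spike detection: threshold crossings on a robust
# noise estimate, then extraction of short snippets for transmission.
import numpy as np

rng = np.random.default_rng(2)
signal = rng.standard_normal(10_000)     # one channel of extracellular data
signal[[1200, 4700, 8300]] -= 8.0        # three injected 'spikes'

# Robust noise estimate: median absolute deviation scaled to a std. dev.
threshold = -4.5 * np.median(np.abs(signal)) / 0.6745
crossings = np.where(signal < threshold)[0]

# Extract a short snippet around each crossing; only these get transmitted.
snippets = [signal[max(0, i - 10) : i + 22] for i in crossings]
print(f"{len(snippets)} spikes detected")
```
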
Likewise, the technological challenge of using non-invasive techniques involves increasing spatial and
temporal resolution and miniaturizing the devices. This poses engineering challenges that may sometimes
look trivial yet are important, for example the miniaturization of the power supply for the electronic
components that must be implanted. These developments will be used not only for motor prostheses but
also for other treatments, such as closed-loop deep brain stimulation, which must be improved to include
recording of brain activity; that would allow dynamic, adaptive stimulation that conditions the brain activity
to restore normal activity when it goes astray (as in Parkinson's disease). Finally, while all these smart
detectors and algorithms will be interfaced to the brain on one side, one cannot forget or neglect the other
side: interfacing the output of these devices to effectors. The ultimate solution in neuroprosthetics will be
control of the natural limb; intermediate solutions may range from controlling a computer to robotic
devices (arm, wheelchair, hand, etc.).

Furthermore, the long-term challenge may bring this field to much broader clinical applications,
improving not only paralysis but also other brain functions. This would include cognitive function and
psychiatric conditions, such as psychopathy, obsessive-compulsive disorder, depression and
schizophrenia, which have already been addressed using deep brain stimulation and behavioral
treatments, but which require extensive work to achieve fine-tuned, closed-loop recording and stimulation
that can condition brain activity to switch from diseased patterns of electrical activity to normal ones. First
reports of operant conditioning of subcortical and cortical nuclei with real-time functional magnetic
resonance imaging are promising. Bearing in mind the theory that the brain deals with coordinating its
internal models with incoming inputs and the results of its actions, it is clear that these tasks are not
impossible. Expansion of clinical use will bring about serious ethical issues, which will pose yet another
grand challenge to mankind, one that scientists cannot simply push aside.

So over the next thirty years we will see new types of animal-inspired machines that are more 'messy'
and unpredictable than any we have seen before. These machines will change over time as a result of
their interactions with us and with the world. These silent, pre-linguistic, animal-like machines will be
nothing like humans, but they will gradually come to seem like a strange sort of animal. Machines that
learn, familiar to researchers in labs for many years, will finally become mainstream and enter the public
consciousness. The kinds of problems these machines will tackle are ones that are somewhat noise- and
error-resistant and that do not demand abstract reasoning. A special focus will be behavior that is easier
to learn than to articulate: most of us know how to walk, but we couldn't possibly tell anyone how we do it.
The same goes for grasping objects and other such skills; these things involve building neural networks,
filling in state-spaces and so on, and cannot be captured as a set of rules that we can speak in language.

We people experience the dynamics of our bodies in infancy and thrash about until the changing
internal numbers and weights start to converge on the correct behavior. Different bodies mean different
dynamics. And robots that can learn to walk can learn other sensorimotor skills that we can neither
articulate nor perform ourselves. For example, there are already autonomous lawnmowers that will
wander around gardens all afternoon. The next step might be autonomous vacuum cleaners inside the
house (though clutter and stairs present immediate problems for wheeled robots). There are all sorts of
other uses for artificial animals in areas where people find jobs dangerous or tedious: land-mine
clearance, toxic-waste clearance, farming, mining, demolition, finding objects and robotic exploration, for
example. Any job done currently or traditionally by animals would be a focus. We are already familiar,
from the Mars Pathfinder and other examples, with the idea that we can send autonomous robots not only
to inhospitable places, but also on cheap one-way 'suicide' missions. (Of course, no machine ever 'dies',
since we can restore its mind in a new body on Earth after the mission.)

Whether these types of machines may have a future in the home is an interesting question. If it ever
happens, it will be because the robot is treated as a kind of pet, so that a machine roaming the house is
regarded as cute rather than creepy. Machines that learn tend to develop an individual, unrepeatable
character which humans can find quite attractive. There are already a few software toys, such as the
Windows-based game Creatures and the little Tamagotchi toys, whose personalities people can get much
attached to. A major part of the appeal is the unique, fragile and unrepeatable nature of the software
beings you interact with. If a Creature dies, it may never be possible to raise another one like it again.
Machines in the future will be similar, and the family robot will after a few years be, like a pet, literally
irreplaceable.

There are many things that could hold up progress, but hardware is the one staring us in the
face at the moment. Nobody is going to buy a robotic vacuum cleaner that costs $5,000, no matter how
many big cute eyes are painted on it. Many conceptual breakthroughs will also be needed to create
artificial animals. The major theoretical issue to be solved is probably representation: what language is
and how we classify the world. We say "that's a table" and so on for different objects, but what does an
insect do? What is going on in an insect's head when it distinguishes objects in the world, what
information is being passed around inside, and what kind of data structures is it using? Each robot will
have to learn an internal language customized for its sensorimotor system and the particular
environmental niche in which it finds itself. It will have to learn this internal language on its own, since any
representations we attempt to impose on it, coming from a different sensorimotor world, will probably not
work.

Conclusion:

I have been trying to give an idea of how an artificial brain could be useful. But this is not going to
happen in our lifetime. In the coming decades, we should not expect the human race to become
extinct and be replaced by robots. We can expect classical 'brain like computer' research to go on
producing more and more sophisticated applications in restricted domains: expert systems, chess
programs, Internet agents. At vulnerable points these will continue to be exposed as 'blind automata',
whereas animal-based AI will go on producing stranger and stranger machines, less rationally intelligent
but more rounded and whole, in which we will start to feel that there is somebody at home, in a strange
animal kind of way. In conclusion, we will not see a full 'brain like computer' in our lifetime, but in the long
run it could become a reality.
