
Introduction

Imagine a world where everything is automated. In this world, intelligent machines

control everything from the news people see to the places they go. Artificial intelligence has

become so deeply embedded in the systems of this world that very few even notice it. Almost

every element of life is impacted by automation and artificial intelligence. Learning algorithms

are used to further the advancement of education, medicine, science, and society in general.

Does this sound familiar? It's understandable if it does, because this is our world.

If you've ever used Google, Facebook, Twitter, YouTube, or any of millions of other online platforms, you've interacted with artificial intelligence. Even if you've never used the internet,

systems like the census are analyzed using artificial intelligence. Artificial intelligence, or AI, is a family of mathematical algorithms that map numeric inputs to outputs. As these algorithms receive more input, they adjust their internal parameters; that is, they learn.
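That idea of self-adjustment can be shown in a minimal sketch. Everything here is invented for illustration, not drawn from any real platform: a one-parameter model learns the rule y = 3x by nudging its weight after every example it sees.

```python
# A minimal sketch of a "self-adjusting" algorithm. The task, data, and
# learning rate are illustrative assumptions, not any real system.

def train(examples, lr=0.01, epochs=200):
    w = 0.0  # the model's single adjustable parameter
    for _ in range(epochs):
        for x, y in examples:
            error = w * x - y      # how far off the current output is
            w -= lr * error * x    # self-adjust: move w to shrink the error
    return w

examples = [(1, 3), (2, 6), (3, 9)]
print(train(examples))  # converges toward 3.0
```

The program is never told the rule; it only sees input-output pairs and adjusts itself, which is the sense of "learning" used throughout this essay.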

When most people think about artificial intelligence or intelligent machines, they think of

Terminator-esque characters, when they really should be thinking about data analysis and

processing. Thankfully, the kind of AI generally portrayed on television is quite different from

real AI.

However, artificial intelligence does present some risks. The real risk is not that artificial intelligence will turn on us, but the shockwaves that ripple through society whenever a drastic change in technology occurs.

Historical Context
Our modern systems for AI are quite different from the ones portrayed in film and television.

Modern AI is easy to control. It is generally constrained to a very specific set of tasks and is

unable to perform outside of its intended workspace. For instance, an image recognition system

isn't really able to do anything other than recognize images. Learning algorithms generally work

by processing inputs and returning a set of outputs. However, a non-learning program is required

to generate or collect inputs and respond to outputs. It's like the human brain: without a body, the brain can't survive or communicate. Only through the nervous system can the brain receive

and respond to inputs. AI functions much in the same way. If an algorithm were to decide to

harm humans, it would only be able to damage as much as it is allowed to modify. For instance,

a text-to-speech program would be unable to do any harm beyond generating incorrect speech for the text it is given.

It is worth noting that our current systems for artificial intelligence are far less

sophisticated than what is often portrayed in science fiction. Though our systems are quickly

improving, modern AI is most likely unable to become malicious. Current systems simply don't

have the power to produce that kind of depth. Due to the nature of how AI works, it is possible

for algorithms to damage their systems if outputs are not properly validated or training data is

intentionally polluted. An infamous example of this is Microsoft's Tay. Tay was a Twitter

account that was able to converse with other users on Twitter. Tay learned from each tweet it

was sent. After a group of people decided to flood it with offensive messages, Tay began to use

racist and anti-Semitic terms before Microsoft shut it down (Peter).
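How polluted training data steers a learner can be shown with a deliberately trivial sketch. This is not Tay's actual algorithm; the bot, the messages, and the flooding loop are all invented for the example:

```python
from collections import Counter

# A toy "chatbot" that replies with the most frequent word it has ever seen.
# Flooding it with one repeated message is enough to steer its output.

class EchoBot:
    def __init__(self):
        self.counts = Counter()

    def learn(self, message):
        self.counts.update(message.lower().split())

    def reply(self):
        return self.counts.most_common(1)[0][0]

bot = EchoBot()
bot.learn("hello friend")
bot.learn("hello world")
print(bot.reply())  # "hello" dominates the honest training data

for _ in range(10):
    bot.learn("spam spam spam")  # a coordinated flood of one message
print(bot.reply())  # the flood now dominates: "spam"
```

The model does exactly what it was built to do; the damage comes entirely from the inputs it was fed, which is the pattern the Tay incident illustrated.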

Even with these possibilities in mind, it is extremely unlikely that AI will pose any major

threat to humans, unless it is used in an intentionally malicious manner. For instance, military AI

could lead to more advanced and effective tools for war, or hacking tools could be improved to

find more efficient ways to gain unauthorized access to systems. As with all tools, AI can be

used as a more effective means to achieve both positive and negative ends.

Research and Analysis

AI becoming malevolent is so improbable that it's barely worth discussing at this point.

There are far more likely problems AI will present. AI has the potential to demolish our current

economic and educational systems. A recent study suggested a large loss of manufacturing jobs:

"In 1998, the inflation-adjusted output per worker was much lower than it is today. This

is due to a variety of factors, chief among them being the automation and information

technology advances absorbed by these sectors over this time period. The higher output

per worker has meant firms could lower their price for goods.

Almost 88 percent of job losses in manufacturing in recent years can be attributable to productivity growth. (Hicks and Devaraj 6)

Through increased automation of the manufacturing industry, a massive number of jobs have been displaced. Between 2000 and 2010, over 5 million jobs were lost in the manufacturing sector (Hicks and Devaraj 5). AI will further advance the capabilities of automation and likely

displace more jobs.

However, that's not necessarily a bad thing. Though our systems are shifting,

advancements in AI will also improve the quality of life for everyone through a wide variety of

factors. Everything from transportation to tech support will become easier and more convenient.

Unfortunately, new technology is often faced with opposition. For instance, trains

provide useful and convenient public transportation, but were originally feared by many people.

As Brad Allenby noted,

In the early days of railroads, . . . there was a widespread belief that traveling at

the heretofore unimaginable speed of 25 miles per hour would kill the passengers, in part

because such technology was against the obvious will of God. (4)

In our society, that kind of belief would be considered ridiculous. However, we have a similar

fear towards artificial intelligence. Any discussion of even the most benign usage of AI is met

with uncertainty and opposition. The fear that something as simple as a virtual assistant or a

speech-to-text program could have an ulterior motive is simply preposterous. More importantly,

this is not the right debate to be having. Instead of worrying about ungrounded claims of evil

superintelligence, we should be working to adjust our systems to accommodate for the rapid

changes in science and technology.

Stephen Hawking stated that "[t]he development of full artificial intelligence could spell the end of the human race" (qtd. in Cellan-Jones 1). The concept of "full" artificial intelligence is

an interesting and important one. Current artificial intelligence is simplistic and pales in comparison to its portrayal in movies. One of the more popular algorithms, the neural network, runs on a shifting set of neurons and synapses, much like the human brain. However, a typical neural network has orders of magnitude fewer neurons than the billions in the human brain. What I believe Dr. Hawking meant by "full" AI is artificial intelligence that is able to

perform as large a set of tasks as humans with equal or better accuracy. Unfortunately, we

have not even begun to approach that point. For AI to become full enough to turn against us,

there are several conditions that must be met.
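The neuron-and-synapse structure described above can be sketched in a few lines. This toy network is purely illustrative: its weights are arbitrary placeholders rather than learned values, and it corresponds to no real system.

```python
import math

# A toy feed-forward network: two inputs feed three hidden "neurons"
# through weighted "synapses", which feed one output neuron.
# The weights below are invented placeholders; a real network learns them.

def sigmoid(z):
    # Squashes any number into the range (0, 1), like a neuron's firing rate.
    return 1.0 / (1.0 + math.exp(-z))

def forward(inputs, hidden_weights, output_weights):
    hidden = [sigmoid(sum(w * x for w, x in zip(ws, inputs)))
              for ws in hidden_weights]
    return sigmoid(sum(w * h for w, h in zip(output_weights, hidden)))

hidden_weights = [[0.5, -0.2], [0.3, 0.8], [-0.6, 0.1]]  # 3 neurons, 2 inputs each
output_weights = [0.4, -0.7, 0.2]
output = forward([1.0, 0.0], hidden_weights, output_weights)
print(output)  # a single value between 0 and 1
```

Even scaled up enormously, such a network only maps numbers to numbers; the gap between this and "full" AI is the point of the paragraph above.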

First, we must develop the technology. We would need significant advances in AI to

achieve anything that could conceptualize behavior outside of its intended function or cause

damage that is not the result of random misfortune. To develop the generalized, effective AI needed for malicious intent to be a real concern, we would need far more powerful computers and more efficient algorithms. In fact, the concept of intention is far beyond the reach of current technology. It's difficult to say that AI doesn't think, as our definition of thinking is limited to

our own experiences and traditional metacognitive beliefs. Though the processes used for

modern AI are similar in structure to the decision making of biological beings, they are not

nearly as effective.

In addition, we would need to develop this technology without safeguards or kill switches. Nick

Bostrom theorized that,

An unfriendly AI of sufficient intelligence [may realise] that its[sic] unfriendly

goals will be best realized if it behaves in a friendly manner initially, so that it will be let

out of the box. It will only start behaving in a way that reveals its unfriendly nature when

it no longer matters whether we find out; that is, when the AI is strong enough that

human opposition is ineffectual. (qtd. in Danaher 6)

However, this leans upon the assumption that we are completely trusting of this AI. Through

simple checks on output, which are standard in programming, even superintelligent AI could be

controlled. For instance, requiring human confirmation before an autonomous drone fires

weapons would prevent it from firing on allies or unintended targets. These checks might be similar to, though far more extensive than, Isaac Asimov's famous Laws of Robotics, an indisputable set of rules that all robots in Asimov's fiction must follow. Unlike in Asimov's work, such laws would be implemented as static checks in code rather than taught to the AI and left to interpretation.
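A static check of this kind can be sketched as an ordinary validation wrapper. Everything here is hypothetical: the allowed actions, the messages, and the confirmation callback are invented to show the shape of the idea, not any real system.

```python
# Hypothetical sketch of "static checks in code": hard-coded validation
# runs before any AI-proposed action executes, and the code, not the AI,
# has the final say.

ALLOWED_ACTIONS = {"label_image", "transcribe_audio"}  # the system's whole domain

def validated_execute(proposed_action, require_confirmation=False, confirm=None):
    # Static check: anything outside the permitted domain is refused outright.
    if proposed_action not in ALLOWED_ACTIONS:
        return "rejected: outside permitted domain"
    # Sensitive actions additionally require an explicit human sign-off.
    if require_confirmation and not (confirm and confirm(proposed_action)):
        return "rejected: no human confirmation"
    return "executed: " + proposed_action

print(validated_execute("label_image"))   # executed: label_image
print(validated_execute("fire_weapon"))   # rejected: outside permitted domain
```

Because the checks live outside the learning algorithm, no amount of "friendly behavior" by the AI can talk its way past them; they are fixed code, not learned rules.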

In his paper, John Danaher noted that the possibility of AI fooling humans should be

taken seriously, but also that it should not be allowed to paralyze the progression of AI research.

This is essential to remember. If the progression towards full AI is to proceed much further, there

must be a constant balance between ensuring safety and pushing our society forward.

The final element required for malevolent AI is a larger domain of control. Imagine a

prisoner with a cell phone. They can be told of the outside world, and can give advice to the

caller, but cannot interact with the outside world without another person acting as a proxy. AI is very

similar to this. It can be fed inputs, and will return outputs, but cannot modify the world by itself.

Its outputs could only be dangerous where they are implemented in other systems. AI is generally used in a very restricted domain, such as voice recognition or image labeling. This domain-specificity acts as a restraint, vastly reducing the damage that could occur.

Many experts are concerned with the possible economic impacts of advancements in

artificial intelligence. In a study by the Pew Research Center, it was found that,

Half of these [technology] experts (48%) envision a future in which robots and

digital agents have displaced significant numbers of both blue- and white-collar

workers, with many expressing concern that this will lead to vast increases in income

inequality, masses of people who are effectively unemployable, and breakdowns in the

social order. (Smith and Anderson 3)

These concerns have strong backing and should not be taken lightly. The future of jobs is

uncertain with all new technologies, but the speed at which AI and automation have begun to

advance upon the job market is worrying. Jobs that are simple and repetitive, such as assembly

line jobs, are easily replaced by robots, which are likely far more efficient. Even workers in more

abstract jobs, such as stock trading or technical support, can be supplemented or replaced by AI.

However, it is not the responsibility of the innovators in AI to slow themselves down so

our systems can catch up. Our society needs to be able to adapt around new technology as it is

created. In the free market, it isn't really possible for innovation to be slowed. Due to the rapid

development of technology in places across the world, competition will mandate the

advancement of AI. If we can't slow innovation, traditional systems must adapt. As Nils Nilsson put it, "Retraining [workers] is critically important" (9).

Regardless of the negative impact it could have, AI will continue to be a boon to society.

From automation to search engines, AI has provided billions with useful products and services.

The ability for a machine to learn about its environment is an extremely powerful and helpful

tool.

Why is AI portrayed in such a negative way? Until recently, AI did not exist outside of

the world of science fiction. It's only been in the past few decades that artificial intelligence has

become popularized. Sci-fi films have often portrayed killer robots. In fact, the term "robot" is generally attributed to Karel Čapek's play, Rossum's Universal Robots, where robots rebel

against their masters and destroy the human race.

The root of the fear of AI lies in the rapid development of something so impactful.

Returning to the example of steam engines, you can see some distinct similarities. People feared

trains for their ability to move at high speeds and potential health risks. However, the bigger

impact trains had was the revolutionizing of transportation. The steam locomotive was the

beginning of a long chain of societal changes towards faster, more convenient technology. This

has changed the way our world works in ways we never could have imagined. Perhaps AI will

begin a new age. Nilsson believes that in the future:

There will be a disincentive to work long hours, and everyone will be largely

unemployed. In either case we have unemployment. By "unemployed" I do not mean "unoccupied." Nor do I mean to imply that people will regard their unemployment as in any

way undesirable. I merely mean that people's time will not be spent predominantly working for an income. Income will come from other sources. (6)

Conclusions

Artificial intelligence is just a subset of mathematical systems, not some malevolent

force. We use AI as a powerful tool on a daily basis. From autocomplete to better search and rescue, AI has been a force of positive change, with no indication, or even possibility, that it will

become evil. More often than not, AI has helped us with tasks too tedious or difficult for

humans. Even once more effective AI has been developed, it will probably be less likely to

betray us than any normal person due to artificial restrictions imposed upon it.

The socioeconomic repercussions from new technology are always uncertain. With

improvements in machines and algorithms every second, our societal systems are being stressed

close to the point of breaking. Without massive, rapid change within our government, our

educational and legal systems will fall apart. In fact, they have already begun to. In the

information age, many laws are difficult to enforce or simply don't exist to cope with the

ever-evolving wild west of the online world. Our schools are finally beginning to teach

programming, decades after it became relevant. These two critical pillars of society must be

maintained, and need to begin adjusting themselves to keep up with the development of

technology. Progress will not slow down to accommodate us.

Further research needs to be done on how these systems can be adjusted to prepare for the

further development of technologies such as AI. It seems reasonable that improving education

for technological literacy could help us step towards the improvement of other systems. This

would have the added bonus of preparing students for the modern workforce, where skills such

as critical thinking and technical literacy are more relevant than the current focuses of education.

We live in a world of science fiction. Even ten years ago, predicting the advancements in

technology that have occurred would be difficult or even impossible. The future holds many

surprises, and we need to be prepared for anything. Our government needs to shift from a rigid

set of rules written in response to past scenarios to an ever-changing, dynamic system. Technology will

continue to evolve. Much like the algorithms that will shape it, our society must evolve to be a

fluid, learning system.
