
Dexter Ketchum

Professor Christina Giarrusso


October 20th, 2016
Artificial Intelligence: Coding Our Future
"Anything that could give rise to smarter-than-human intelligence - in the form of Artificial Intelligence, brain-computer interfaces, or neuroscience-based human intelligence enhancement - wins hands down beyond contest as doing the most to change the world. Nothing else is even in the same league." (Eliezer Yudkowsky)
Artificial intelligence is a rapidly growing field of computer science that entails creating software which thinks critically to solve problems in the same manner humans do. With continued advancements in the field, numerous artificial intelligence programs, such as Google Assistant or OpenAI's systems, have learned to break down complex human language and process even complicated requests. These advancements open hundreds of doors into the future of technology, but perhaps the most important has yet to be explored: the infusion of artificially intelligent agents into code compilers.
Where programming often requires convoluted syntax, AI could serve as a bridge across that gap, allowing programmers to type their instructions in plain English, which the AI system within would then convert into code. The best way to articulate this concept is through an example. The following snippet of C++ code will generate a random number between 1 and 10:
#include <cstdlib>  // rand, srand
#include <ctime>    // time

// Returns a pseudo-random integer between 1 and 10, inclusive.
int GenerateRandomNumber() {
    srand(time(NULL));                    // seed the generator
    int random_number = rand() % 10 + 1;  // map to the range 1-10
    return random_number;
}
In contrast, a system that is developed with an artificially intelligent language-to-code
converter would allow for instructions more akin to this:
Generate a random number between 1 and 10. Save this
number in the variable random_number.
Although being able to write code in this manner would be delightful, a number of limitations in current artificially intelligent systems unfortunately restrict their ability to understand complex human language. However, there is a discipline dedicated to making coding more representative of traditional languages: natural-language programming. According to a study by Dr. Richard A. Frost on the subject, there are three major interpretation difficulties that must be overcome by current AI technology to allow for the optimal natural-language coding environment. First, the computer must be able to break down reduplicated expressions, akin to "a state within a state" or "a church within a church." Second, it must be able to evaluate multiple-agreement statements, such as "John, Timothy, and Bob were employed as a carpenter, fireman, and farmer." Finally, it must be able to interpret cross-serial dependencies, which are common in some European languages, as in "we help Hans paint the house." (2006, 6-7)
To overcome these obstacles, various researchers have recommended creating software architecture that is organized in a pattern similar to the human brain. One such researcher, Rosemarie Velik, describes the brain-like system as a way to overcome issues in logical understanding, particularly by allowing computers to analyze a given situation in a dynamic manner. Presently, computers have difficulty understanding convoluted requests in unstructured contexts, while they perform exceptionally well on direct logic questions in structured environments. She claims that a brain architecture would alleviate these limitations, particularly because humans excel at dealing with convoluted questions, even in disorganized contexts. Velik offers two primary examples to contrast disorganized and organized task environments: an industrial sorting task and a safety-and-surveillance task. Where industrial sorting is monotonous, straightforward, and specific, a computer-controlled system excels. However, in the case of a security system checking multiple cameras, there is a broad range of assignments laid out across disorganized screens. As of yet, no computer system has been designed that can efficiently monitor security feeds and categorize human behaviors into security threat levels. (2013, 26-28)
These challenges are uniquely applicable to the idea of an artificially intelligent coding language, particularly because such systems need to dynamically understand human languages and, more importantly, accurately produce code that reflects the words written on the screen. Since coding languages allow for a variety of different paths to achieve similar goals, the computer must be able to understand the flow of logic the user is dictating. Where one user may seek to use switch statements, for example, another might be more inclined to use if-then control structures. These minuscule differences would have to be accounted for and calibrated by the program on the fly, which is another issue that must be resolved before merging these two technologies.
With that being said, while the first two studies focused primarily on the challenges involved in further expanding artificial intelligence, other researchers have taken it upon themselves to explore the negative implications of expanding intelligent systems. Thomas G. Dietterich and his colleagues, for example, explore the dangers of artificial intelligence. Perhaps the most prominent example given, which has been discussed in numerous other academic papers, is the possibility of exponential intellectual growth. In layman's terms, this refers to an artificially intelligent system that is able to learn enough about itself and the world around it to then add installments onto its own system. Once a piece of software learns how to build onto itself, the computer can learn and expand without limit until it dwarfs the human intellect. (2015, 38-40)
This sort of situation is unlikely in the present state of artificial intelligence, but as we overcome the challenges cited by the previous two researchers, we will no doubt move toward a reality where these dangers could present themselves. In the event of such exponential growth, we could encounter a number of dystopian situations often portrayed in movies, such as a world overrun by sentient constructs, or the internet being infiltrated by a highly intelligent software system capable of hacking into databases, taking over remote computers, or even launching missiles from secure sites. (CITATION NEEDED)
However, before artificially intelligent systems become superintelligent human-destroying machines, there are more immediate risks of furthering the technology. As described in a research venture by Carl Benedikt Frey, there is a growing risk that artificial intelligence will replace monotonous labor jobs. As previously mentioned in Velik's paper, these systems are particularly good at explicitly defined tasks (such as driving from point A to point B) in uniform environments (such as a lane in a street). Some jobs particularly at risk of being replaced by automation are truck drivers, assembly workers, and, surprisingly, doctors. (2013) IBM's Watson, a supercomputer which absorbs knowledge from the internet, recently read through 20 million cancer research papers before generating a diagnosis for a cancer patient whom doctors had been unable to diagnose for months. An even more impressive feat was that the computer managed to generate this conclusion in a matter of ten minutes. (CITATION NEEDED) This presents a vibrant future where machines offer more accurate diagnoses than human doctors, but it also represents a grim possibility of human doctors being entirely replaced by machines. If IBM's Watson can learn how to diagnose cancer in ten minutes, it may very well be able to learn how to precisely operate on a patient in the coming years, rendering human surgeons inefficient and costly.
These dangers are even more prevalent when combining artificial intelligence with code compilation. If a program learns how to convert English into code, it can essentially produce code on its own. This revisits the possibility of exponential intellectual growth described in Dietterich's research. The artificially intelligent program would essentially be taught how to build code, and would thus have the capacity to build modules onto itself, improving its capabilities as a coding language. However, as it continues to do this, it could begin to escape the confines of its purpose and work towards other goals.

Citations

1. Frost, Richard A. 2006. "Realization of Natural Language Interfaces Using Lazy Functional Programming." ACM Computing Surveys 38, no. 4: 1-54. Academic Search Complete, EBSCOhost (accessed October 12, 2016).
2. Velik, Rosemarie. 2013. "Brain-Like Artificial Intelligence for Automation - Foundations, Concepts and Implementation Examples." BRAIN: Broad Research In Artificial Intelligence & Neuroscience 4, no. 1-4: 26-54. Academic Search Complete, EBSCOhost (accessed October 18, 2016).
3. Dietterich, Thomas G., and Eric J. Horvitz. 2015. "Rise of Concerns about AI: Reflections and Directions." Communications Of The ACM 58, no. 10: 38-40. Academic Search Complete, EBSCOhost (accessed October 18, 2016).
4. Green, Spence, Jeffrey Heer, and Christopher D. Manning. 2015. "Natural Language Translation at the Intersection of AI and HCI." Communications Of The ACM 58, no. 9: 48-53. Academic Search Complete, EBSCOhost (accessed October 18, 2016).
5. Frey, Carl Benedikt, and Michael A. Osborne. 2013. "The Future of Employment: How Susceptible Are Jobs to Computerisation?"
6. Ghahramani, Zoubin. 2015. "Probabilistic machine learning and artificial intelligence." Nature 521, no. 7553: 452-459. Academic Search Complete, EBSCOhost (accessed October 18, 2016).
7. Muggleton, Stephen. 2014. "Alan Turing and the development of Artificial Intelligence." AI Communications 27, no. 1: 3-10. Academic Search Complete, EBSCOhost (accessed October 18, 2016).
8. http://www.nydailynews.com/news/world/ibm-watson-proper-diagnosis-doctors-stumped-article-1.2741857 (FORMATTING INCOMPLETE)
