
In the future, I can see artificial intelligence running on every electronic device that humans interact with on a daily basis. For example, I can see a smartphone performing all of its owner's tasks for them. What I mean by this is that if I told my phone to do several different things at once, it would perform those tasks, but in the way that I would. For example, if I told the AI on my phone to reply to all of my texts, it would reply in the same way that I do, so the recipients would believe that I wrote the replies myself. The AI itself would also feel emotions to help it make its own decisions.
There are many reasons why my AI does not fully exist today. Recently, Google's AI development team produced a VEE (Virtual Emotion Engine) for their assistant. The VEE is not my future technology because it has not been fully completed: my AI would use all five senses to make its decisions, while Google's VEE uses only hearing. The other senses would give the AI more data, which in turn would make its decisions more accurate. Unfortunately, taste and smell are hard to replicate, and that technology would have to improve before my AI could make fully accurate decisions.
The other problem is the human factor. All people feel emotions differently, and this causes inconsistencies in the AI's dataset. This is why my AI would only use data collected from people medically classified as mentally stable with no abnormalities. I would also train my AI on a large group of people, so that if there are any outliers, the dataset would be large enough for the AI to recognize them. When the AI finds an outlier, its policy would be to discard that data unless the programmer tells it otherwise. To make the AI operate successfully, there have to be principles that it has to follow.
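The outlier policy described above could be sketched in code. This is only an illustration of the idea: the z-score method, the 2.0 threshold, and all the names here are assumptions made up for this example, not part of any real system.

```python
# Illustrative sketch: discard emotional-response readings that are
# statistical outliers relative to the rest of the dataset.

def filter_outliers(values, threshold=2.0):
    """Keep only values within `threshold` standard deviations of the mean."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return list(values)  # all values identical; nothing to discard
    return [v for v in values if abs(v - mean) / std <= threshold]

# Hypothetical emotion-intensity readings from six people; 25.0 is an outlier.
responses = [5.1, 4.9, 5.0, 5.2, 4.8, 25.0]
clean = filter_outliers(responses)
print(clean)  # the outlier 25.0 is removed
```

A larger dataset makes this kind of check more reliable, which matches the essay's point that the group of people used for training has to be large enough for outliers to stand out.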
The scientific principles that my AI would follow are cause-and-effect logic, the laws of emotion, the three laws of robotics, and the laws of ethics. Cause-and-effect logic is used in the algorithms that control the AI, which in turn make the AI think like a logical human. The three laws of robotics are: 1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and 3) a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. These laws are self-explanatory, because obviously we want to protect the human race. The laws of emotion would help my AI make good decisions about how it should feel. Lastly, the laws of ethics would be used to make sure that my AI has good morals and cares for all living organisms.
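The three laws form an ordered priority check, and that ordering can be sketched as code. This is only an illustration: the action fields and the rule logic are assumptions invented for this example, not a real safety system.

```python
# Illustrative sketch: checking a proposed action against the three
# laws of robotics, in priority order (First Law outranks Second,
# Second outranks Third).

def action_allowed(action):
    """Return True only if the action passes all three laws."""
    # First Law: never harm a human, by action or by inaction.
    if action.get("harms_human") or action.get("allows_harm_by_inaction"):
        return False
    # Second Law: obey human orders, unless the order conflicts
    # with the First Law.
    if action.get("disobeys_order") and not action.get("order_conflicts_first_law"):
        return False
    # Third Law: self-preservation, unless a higher law requires the risk.
    if action.get("endangers_self") and not action.get("required_by_higher_law"):
        return False
    return True

print(action_allowed({"harms_human": True}))   # forbidden by the First Law
print(action_allowed({"endangers_self": True,
                      "required_by_higher_law": True}))  # allowed
```

The important design point is the ordering: each lower law only applies when it does not conflict with the laws above it, which is exactly how the three laws are stated.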
The way that I would test my AI is that the same people I used for the dataset would talk to the AI and then report back to me how they feel about it. When all of the members from the dataset give me the O.K., I would talk and interact with the AI myself. After that, I would have the AI interact with people who were not part of the original testing, and once they say the AI is ready, I would test it in the real world. I would never allow my AI to interact with humans in an unsupervised manner; what I mean by this is that whenever someone is talking to the AI, another human must be present. The reason for these drastic measures is that an error in my programming is probable, due to the complexity of programming an AI and the human factor. As a final safeguard, I would have a kill switch on the AI's computer that turns it off, so that no one ever gets hurt.
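The supervision rule and the kill switch described above can also be sketched as code. The class, its fields, and the echo reply are all hypothetical, invented purely to illustrate the two safeguards.

```python
# Illustrative sketch: an AI wrapper that refuses to respond without a
# human supervisor present, and that can be permanently disabled by a
# kill switch.

class SupervisedAI:
    def __init__(self):
        self.killed = False
        self.supervisor_present = False

    def kill(self):
        """Kill switch: permanently shuts the AI down."""
        self.killed = True

    def respond(self, message):
        if self.killed:
            raise RuntimeError("AI has been shut down by the kill switch")
        if not self.supervisor_present:
            raise RuntimeError("a human supervisor must be present")
        # Placeholder reply; a real system would generate one here.
        return f"echo: {message}"

ai = SupervisedAI()
ai.supervisor_present = True
print(ai.respond("hello"))  # works while supervised
ai.kill()                   # after this, respond() always raises
```

In this sketch both safeguards fail closed: if either condition is not met, the AI raises an error instead of answering, which mirrors the essay's rule that no one should ever be hurt by an unsupervised or faulty system.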
