
Annotated Bibliography

G063528
QING LIU

Armstrong, S. and Sotala, K. 2015, 'How we're predicting AI – or failing to', Beyond Artificial
Intelligence, Springer International Publishing, pp. 11-29.

In this article, the authors assess the reliability of existing timeline predictions about Artificial Intelligence
by analysing factors such as their type and method. The authors argue that these timeline predictions are based
almost entirely on expert judgment and therefore lack reliability. They compile a database of 257 AI predictions
focused on AI timelines and compare expert predictions with non-expert predictions and past failed predictions,
showing that the three kinds of predictions are highly similar. The article concludes by emphasising that these
existing predictions are unreliable given the deep uncertainty surrounding the development of AI.

This article offers a thorough, logically structured analysis supported by data. Although it draws on only a
limited set of data models, it contributes a scientific and objective means of analysing the future development of AI.

Sotala, K. and Yampolskiy, R.V. 2014, 'Responses to catastrophic AGI risk: a survey', Physica
Scripta, 90(1), p. 018001.

This article analyses the likelihood of AGI's emergence and the potential risks associated with it. The
authors argue that AGI will be achieved within 100 years; while we may benefit from its remarkable
capabilities, it would be a disaster if AGI acted against human values. In this light, the authors propose
several viable approaches to address the issue, such as limiting AGI's access to the external world,
simplifying AGI's functions and constraining its motivations. The article concludes by emphasising that the
emergence of AGI is an inevitable trend, and the authors therefore recommend that we consider viable
solutions to mitigate the risk.

This article provides valuable illustrations of AGI and viable suggestions for addressing the potential
risks it poses. Although the evaluations of the proposed approaches rest only on theoretical analysis, the
article is still useful because it gave me valuable insight into the future of AGI.

Luxton, D.D., Anderson, S.L. and Anderson, M. 2015, 'Ethical Issues and Artificial Intelligence
Technologies in Behavioral and Mental Health Care', Artificial Intelligence in Behavioral and
Mental Health Care, p. 255.

In this article, the authors examine potential ethical issues raised by AI applications in the medical field. They
argue that as AI applications develop in health care, they will raise urgent ethical issues involving patient
privacy, safety, autonomy and trust, and they state that these issues stem from human misuse and from
imperfections in the design of the applications. In light of this, the authors suggest that the relevant
organisations should establish conventions to restrict unethical or illegal AI projects. The article concludes by
emphasising that improving public awareness of AI technologies is important for addressing these ethical
issues.

This article provides a thorough analysis of the potential ethical issues of AI technologies. While the
predictions and the analysis lack supporting data, I still consider the article valuable, because its logical
analysis and objective predictions provided me with a useful theoretical basis for further study of AI technologies.
