Showing 1 - 10 of 177 for search: '"YAMPOLSKIY, ROMAN V."'
In this thorough study, we took a closer look at the skepticism that has arisen with respect to potential dangers associated with artificial intelligence, denoted as AI Risk Skepticism. Our study takes into account different points of view on the topic …
External link:
http://arxiv.org/abs/2303.03885
Author:
Brcic, Mario, Yampolskiy, Roman V.
Published in:
ACM Computing Surveys, 2023
An impossibility theorem demonstrates that a particular problem or set of problems cannot be solved as described in the claim. Such theorems put limits on what is possible to do concerning artificial intelligence, especially super-intelligent ones …
External link:
http://arxiv.org/abs/2109.00484
Author:
Burkhardt, Micah, Yampolskiy, Roman V.
Death has long been overlooked in evolutionary algorithms. Recent research has shown that death (when applied properly) can benefit the overall fitness of a population and can outperform sub-sections of a population that are "immortal" when allowed to …
External link:
http://arxiv.org/abs/2109.13744
Author:
Yampolskiy, Roman V.
In this work, we survey skepticism regarding AI risk and show parallels with other types of scientific skepticism. We start by classifying different types of AI Risk skepticism and analyze their root causes. We conclude by suggesting some interventions …
External link:
http://arxiv.org/abs/2105.02704
As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications …
External link:
http://arxiv.org/abs/2104.12582
Author:
Yampolskiy, Roman V.
Invention of artificial general intelligence is predicted to cause a shift in the trajectory of human civilization. In order to reap the benefits and avoid the pitfalls of such powerful technology, it is important to be able to control it. However, the possibility …
External link:
http://arxiv.org/abs/2008.04071
Author:
Yampolskiy, Roman V.
The terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research: creation of a machine capable of achieving goals in a wide …
External link:
http://arxiv.org/abs/2007.07710
Author:
Scott, Peter J., Yampolskiy, Roman V.
In this paper we examine historical failures of artificial intelligence (AI) and propose a classification scheme for categorizing future failures. By doing so we hope that (a) the responses to future failures can be improved through applying a systematic …
External link:
http://arxiv.org/abs/1907.07771