Description: |
Human beings may face significant security risks after entering the era of artificial general intelligence (AGI). By summarizing the differences between AGI and traditional artificial intelligence, we analyze the sources of AGI security risks from three aspects: the uninterpretability of models, the unreliability of algorithms and hardware, and the uncontrollability of autonomous consciousness. We then propose a security risk assessment system for AGI covering ability, motivation, and behavior, and discuss defense countermeasures in the research and application stages. In the research stage, theoretical verification should be improved to develop interpretable models, the basic values of AGI should be rigorously constrained, and technologies should be standardized. In the application stage, man-made risks should be prevented, appropriate motivations should be selected for AGI, and human values should be instilled in AGI. Furthermore, international cooperation and the education of AGI professionals should be strengthened to prepare well for the unknown coming era of AGI.