Understanding, Assessing, and Mitigating Safety Risks in Artificial Intelligence Systems

Author: Kroll, Joshua A.; Berzins, Valdis
Contributors: Naval Postgraduate School, Computer Science (CS)
Publication year: 2022
Description: Prepared for: Naval Air Warfare Development Center (NAVAIR). Traditional software safety techniques rely on validating software against a deductively defined specification of how the software should behave in particular situations. In the case of AI systems, specifications are often implicit or inductively defined. Data-driven methods are subject to sampling error, since practical datasets cannot provide exhaustive coverage of all possible events in a real physical environment. Traditional software verification and validation approaches therefore may not apply directly to these novel systems, complicating system safety analysis (such as that prescribed by MIL-STD-882). However, AI offers advanced capabilities, and it is desirable to ensure the safety of systems that rely on those capabilities. When AI technology is deployed in a weapon system, robot, or planning system, unwanted events are possible. Several techniques can support the evaluation process for understanding the nature and likelihood of unwanted events in AI systems and for making risk decisions about naval employment. This research surveys the state of the art and evaluates which techniques are most likely to be employable, usable, and correct. The techniques considered include software analysis, simulation environments, and mathematical determinations.
Sponsors: Naval Air Warfare Development Center; Naval Postgraduate School, Naval Research Program (PE 0605853N/2098)
Distribution: Approved for public release; distribution is unlimited.
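
Note: To illustrate the sampling-error limitation described above, the following minimal Python sketch (not drawn from the report; the trial counts are hypothetical and the trials are assumed independent and identically distributed) applies the standard "rule of three": if n independent tests all pass, the one-sided 95% upper confidence bound on the failure probability is roughly 3/n, so finite test data alone cannot certify an arbitrarily small failure rate.

    import math

    def failure_rate_upper_bound(n_trials: int, confidence: float = 0.95) -> float:
        """One-sided upper confidence bound on the per-trial failure
        probability when n_trials independent tests all succeed.

        Derived from requiring (1 - p)**n_trials <= 1 - confidence, i.e.
        p <= 1 - (1 - confidence)**(1 / n_trials); at 95% confidence this
        is approximately 3 / n_trials (the "rule of three").
        """
        if n_trials <= 0:
            raise ValueError("n_trials must be positive")
        return 1.0 - (1.0 - confidence) ** (1.0 / n_trials)

    # Hypothetical trial counts: even a million flawless test runs only
    # bound the per-event failure probability near 3e-6, far short of
    # exhaustive coverage of a real physical environment.
    for n in (100, 10_000, 1_000_000):
        print(f"n={n:>9}: p <= {failure_rate_upper_bound(n):.2e}")

The sketch uses the exact bound 1 - (1 - confidence)**(1/n) rather than the 3/n approximation, so it remains valid at confidence levels other than 95%.
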
Database: OpenAIRE