AI Safety Subproblems for Software Engineering Researchers
Author: | Gros, David; Devanbu, Prem; Yu, Zhou |
---|---|
Publication Year: | 2023 |
Document Type: | Working Paper |
Description: | In this 4-page manuscript we discuss the problem of long-term AI Safety from a Software Engineering (SE) research viewpoint. We briefly summarize long-term AI Safety and the challenge of avoiding harms from AI as systems meet or exceed human capabilities, including software engineering capabilities (and approach AGI / "HLMI"). We perform a quantified literature review suggesting that AI Safety discussions are not common at SE venues. We make conjectures about how software might change with rising capabilities, and categorize "subproblems" that fit into traditional SE areas, proposing how work on similar problems might improve the future of AI and SE. Comment: Posted to arXiv Apr 2023. Updated June 2023 to correct typos and make small text changes. Updated Sept 2023: small typo fixes and adjustments, clarified the intro's citation-analysis focus on HLMI / advanced AI, reran scripts and tweaked handling of unknown venues, added TOSEM, de-anonymized the GitHub repository and acknowledgements. |
Database: | arXiv |
External Link: |