Value alignment: a formal approach
Author: | Sierra, Carles; Osman, Nardine; Noriega, Pablo; Sabater-Mir, Jordi; Perelló, Antoni |
---|---|
Year of publication: | 2021 |
Source: | Responsible Artificial Intelligence Agents Workshop (RAIA) at AAMAS 2019 |
Document type: | Working Paper |
Description: | Value alignment is one of the principles that should govern autonomous AI systems. It essentially states that a system's goals and behaviour should be aligned with human values. But how can value alignment be ensured? In this paper we first provide a formal model that represents values through preferences, together with ways to compute value aggregations, i.e. preferences with respect to a group of agents and/or with respect to sets of values. Value alignment is then defined, and computed, for a given norm with respect to a given value through the increase or decrease that the norm yields in the preferences over future states of the world. We focus on norms because it is norms that govern behaviour; as such, the alignment of a given system with a given value is dictated by the norms the system follows. Comment: accepted paper at the Responsible Artificial Intelligence Agents Workshop of the 18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS 2019). (A rough illustrative sketch of the alignment computation appears below.) |
Database: | arXiv |
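
The record gives only the abstract, not the paper's formal definitions, but the idea it describes (the alignment of a norm with a value measured through the increase or decrease in preferences over future states of the world) can be sketched informally. The Python below is a minimal, assumed reading for illustration only: the names `alignment`, `preference`, and the transition dictionaries, as well as the choice of a simple average of preference changes, are assumptions rather than the paper's actual model.

```python
from typing import Callable, Dict, Iterable, Set

# Illustrative sketch only: states are opaque labels, and a "value" is modelled
# as a numeric preference function over states (higher = more preferred with
# respect to that value). All names here are assumptions, not the paper's own.
State = str
Preference = Callable[[State], float]


def alignment(norm_transitions: Dict[State, Set[State]],
              baseline_transitions: Dict[State, Set[State]],
              preference: Preference,
              states: Iterable[State]) -> float:
    """Average change in preference over reachable future states when the norm
    is in force, compared with an unregulated baseline.

    A positive result means the norm tends to steer the system towards states
    that are more preferred with respect to the value; a negative result means
    the norm works against that value.
    """
    def mean_pref(transitions: Dict[State, Set[State]], s: State) -> float:
        successors = transitions.get(s, {s})  # no outgoing transition: stay put
        return sum(preference(t) for t in successors) / len(successors)

    deltas = [mean_pref(norm_transitions, s) - mean_pref(baseline_transitions, s)
              for s in states]
    return sum(deltas) / len(deltas)


if __name__ == "__main__":
    # Toy example: a "fairness" value that prefers states with a smaller gap
    # between two agents' wealth (hypothetical data, for illustration only).
    wealth = {"s0": (5, 5), "s1": (9, 1), "s2": (6, 4)}

    def fairness(s: State) -> float:
        a, b = wealth[s]
        return -abs(a - b)

    baseline = {"s0": {"s1", "s2"}}   # without the norm, the unfair state s1 is reachable
    taxation_norm = {"s0": {"s2"}}    # the norm rules out the unfair successor

    print(alignment(taxation_norm, baseline, fairness, states=["s0"]))  # positive: aligned
```

In this toy reading, a norm is modelled as a restriction on which successor states remain reachable; its alignment with a value is positive exactly when the states it keeps reachable are, on average, preferred over the unregulated baseline with respect to that value.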