Learning Fair Cooperation in Mixed-Motive Games with Indirect Reciprocity
Author: Smit, Martin; Santos, Fernando P.
Year of publication: 2024
Source: Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence, Main Track (2024), pages 220-228
Document type: Working Paper
DOI: 10.24963/ijcai.2024/25
Description: Altruistic cooperation is costly yet socially desirable. As a result, agents struggle to learn cooperative policies through independent reinforcement learning (RL). Indirect reciprocity, where agents consider their interaction partner's reputation, has been shown to stabilise cooperation in homogeneous, idealised populations. However, more realistic settings consist of heterogeneous agents with different characteristics and group-based social identities. We study cooperation when agents are stratified into two such groups, and allow reputation updates and actions to depend on group information. We consider two modelling approaches: evolutionary game theory, where we comprehensively search for social norms (i.e., rules to assign reputations) that lead to cooperation and fairness; and RL, where we consider how the stochastic dynamics of policy learning affect the analytically identified equilibria. We observe that a defecting majority leads the minority group to defect, but not the inverse. Moreover, changing the norms that judge in-group and out-group interactions can steer a system towards either fair or unfair cooperation. This becomes clearer when moving beyond equilibrium analysis to independent RL agents, where convergence to fair cooperation occurs with a narrower set of norms. Our results highlight that, in heterogeneous populations with reputations, carefully defining interaction norms is fundamental to tackling both the dilemma of cooperation and that of fairness.
Comment: Main text (9 pages, 6 figures) and appendix (7 pages, 4 figures)
Database: arXiv
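The description refers to social norms that assign reputations and may judge in-group and out-group interactions by different rules, within a reputation-based (indirect reciprocity) interaction. The minimal Python sketch below illustrates that general setup under stated assumptions: the choice of norms (stern judging in-group, image scoring out-group), the binary reputations, the donation-game payoffs `b` and `c`, and all function names are illustrative, not the paper's actual norms or implementation.

```python
"""Minimal sketch: group-dependent social norms in a donation game.

Assumptions (not taken from the paper): binary reputations, stern judging
for in-group interactions, image scoring for out-group interactions, and a
simple reputation-based discriminator policy.
"""

COOPERATE, DEFECT = "C", "D"
GOOD, BAD = 1, 0


def stern_judging(action, recipient_rep):
    # Cooperating with a good partner or defecting against a bad one is judged good.
    return GOOD if (action == COOPERATE) == (recipient_rep == GOOD) else BAD


def image_scoring(action, recipient_rep):
    # Only the action matters: cooperation is judged good, defection bad.
    return GOOD if action == COOPERATE else BAD


def assign_reputation(donor_group, recipient_group, action, recipient_rep):
    # Group-dependent norm: in-group and out-group interactions are judged
    # by different rules (which rule applies where is an assumption here).
    norm = stern_judging if donor_group == recipient_group else image_scoring
    return norm(action, recipient_rep)


def donation_game_round(donor, recipient, reputations, groups, policy, b=2.0, c=1.0):
    """One donation-game round: the donor observes the recipient's reputation
    and group, acts, pays cost c if cooperating (the recipient gains b), and
    is assigned a new reputation by the group-dependent norm."""
    action = policy(groups[donor], groups[recipient], reputations[recipient])
    donor_payoff = -c if action == COOPERATE else 0.0
    recipient_payoff = b if action == COOPERATE else 0.0
    reputations[donor] = assign_reputation(
        groups[donor], groups[recipient], action, reputations[recipient]
    )
    return donor_payoff, recipient_payoff


def discriminator(donor_group, recipient_group, recipient_rep):
    # Example policy: cooperate only with partners in good standing.
    return COOPERATE if recipient_rep == GOOD else DEFECT


if __name__ == "__main__":
    reputations = {"a": GOOD, "b": BAD}
    groups = {"a": 0, "b": 1}  # two social groups
    print(donation_game_round("a", "b", reputations, groups, discriminator))
    # Donor "a" defects against bad-reputation, out-group "b", so the
    # out-group norm (image scoring) assigns "a" a bad reputation.
    print(reputations)
```

In this sketch, swapping which norm judges out-group interactions changes how defection against the other group is rewarded or punished, which is the kind of lever the paper's search over norms varies.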