Popis: |
Although the debate on AI regulation remains fluid at a global level and the European initiatives are still in their early stages, three possible approaches to grounding AI regulation in human rights are emerging. The first is a principles-based approach, built on guiding principles derived from existing binding and non-binding international human rights instruments, which could provide a comprehensive framework for AI. A second approach focuses more narrowly on the impacts of AI on individual rights and on safeguarding those rights through rights-based risk assessment; this is the path followed by the Council of Europe in its ongoing work on AI regulation. Finally, as outlined in the EU proposal, greater emphasis can be placed on managing high-risk applications through product safety and conformity assessment. Despite their differences, all three models treat the protection of human rights as a central concern. In none of these regulatory proposals, however, is the emphasis on risk management accompanied by an effective model for assessing the impact of AI on human rights. Analysis of the current debate therefore confirms that the HRESIA could not only offer an effective response to human rights-oriented AI development that also encompasses societal values, but could also bridge a gap in the current regulatory proposals.