Popis: |
This article discusses the obligations of controllers with regard to automated decision-making: must they provide individuals with information, an explanation, or a justification? The answer is: all three. The main underlying rationales of EU data protection laws are preventing information inequality and information injustice. These rationales can only be served if controllers cannot hide behind the algorithms they use for automated individual decision-making. To comply with their data protection obligations, controllers must design, develop, and apply algorithms in a transparent, predictable, and verifiable manner. Controllers are accountable for the outcome and must therefore ultimately be able to justify the criteria on which automated decision-making is based. The current academic debate about whether individuals have only a limited right to information or also a right to an explanation misses the bigger picture, with the risk that companies will do the same. The article uses multidisciplinary sources from regulatory studies, law, and computer science to understand the multifaceted implications of algorithm accountability for the protection of personal data and the expectations that individuals may have in that respect. The article concludes with an overview of the requirements of white-box development.