Adopting trust as an ex post approach to privacy

Author: Asgarinia, Haleh
Source: AI and Ethics; 2024, Preprints, pp. 1-14
Abstract: This research explores how a person with whom information has been shared and, importantly, an artificial intelligence (AI) system used to deduce information from the shared data contribute to making the disclosure context private. The study posits that private contexts are constituted by the interactions of individuals in the social context of intersubjectivity based on trust. Hence, to make the context private, the trustee (i.e., the person with whom information has been shared) must fulfil trust norms. According to the commitment account of trustworthiness, a person is trustworthy only if they satisfy the norm of competence. It is argued that a person using an AI system to answer a question is competent only if they are ex post justified in believing what the AI system has delivered. A person’s belief is justified in the doxastic sense only if the AI system is accurate. This feature of the AI system’s performance affects the person’s competence and, as a result, their trustworthiness. Because AI affects trust, an essential component of making the context private, an AI system also affects privacy. Therefore, a private context is constituted when the individual with whom the information is shared fulfils the competence norm and the AI system used to analyse the information is sufficiently accurate to meet this norm. This research emphasises the significance of the relationship between the individuals involved in information-sharing and shows how an AI system used to analyse that information affects both that relationship’s role in making the context private and privacy itself. The findings have significant implications for improving privacy regulations in light of trust.
Database: Supplemental Index