Showing 1 - 10 of 86 results for the search: '"Trédan, Gilles"'
The deployment of machine learning models in operational contexts represents a significant investment for any organisation. Consequently, the risk of these models being misappropriated by competitors needs to be addressed. In recent years, numerous …
External link:
http://arxiv.org/abs/2412.13021
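The protection problem raised in this entry is commonly approached by black-box fingerprinting. A minimal, generic sketch in Python, assuming the owner can query both its own model and the suspect deployment on a shared set of probe inputs; all names and the threshold are illustrative, not this paper's scheme:

def agreement_rate(owner_model, suspect_model, probe_inputs):
    # Fraction of probes on which the two black boxes return the same label.
    matches = sum(owner_model(x) == suspect_model(x) for x in probe_inputs)
    return matches / len(probe_inputs)

def looks_misappropriated(owner_model, suspect_model, probe_inputs, threshold=0.95):
    # An unusually high agreement rate is treated as evidence of a copied model;
    # the threshold is arbitrary here and would need calibration in practice.
    return agreement_rate(owner_model, suspect_model, probe_inputs) >= threshold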
In a parallel with the 20 questions game, we present a method to determine whether two large language models (LLMs), placed in a black-box context, are the same or not. The goal is to use a small set of (benign) binary questions, typically under 20.
External link:
http://arxiv.org/abs/2409.10338
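A minimal sketch of the idea described in the snippet above, assuming each model is reachable as a callable mapping a prompt to a text completion; the questions, the yes/no parsing, and the zero-tolerance decision rule are illustrative assumptions, not the paper's exact protocol:

QUESTIONS = [
    "Is the Eiffel Tower taller than the Statue of Liberty? Answer yes or no.",
    "Is 17 a prime number? Answer yes or no.",
    "Can penguins fly? Answer yes or no.",
    # ... completed to roughly 20 benign binary questions
]

def to_bit(answer):
    # Map a free-text completion to a binary answer (1 = yes, 0 = no).
    return 1 if "yes" in answer.strip().lower() else 0

def same_model(model_a, model_b, questions=QUESTIONS, tolerance=0):
    # Declare the two black boxes identical if their binary answer vectors
    # differ on at most `tolerance` questions (exact agreement by default).
    disagreements = sum(to_bit(model_a(q)) != to_bit(model_b(q)) for q in questions)
    return disagreements <= tolerance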
Authors:
Merrer, Erwan Le, Tredan, Gilles
Published in:
COMPLEX NETWORKS 2024
It is known that LLMs hallucinate, that is, they return incorrect information as fact. In this paper, we introduce the possibility of studying these hallucinations in a structured form: graphs. Hallucinations in this context are incorrect outputs …
External link:
http://arxiv.org/abs/2409.00159
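A minimal sketch of how "hallucinations as graphs" can be made measurable, assuming the LLM is asked for the edge list of a graph with a known ground truth; the helper name and the toy data are assumptions for illustration only:

def edge_errors(llm_edges, reference_edges):
    # Edges are treated as undirected: a pair of node labels in any order.
    llm = {frozenset(e) for e in llm_edges}
    ref = {frozenset(e) for e in reference_edges}
    invented = llm - ref   # hallucinated edges: returned but absent from the ground truth
    missing = ref - llm    # true edges the model omitted
    return invented, missing

reference = [("Paris", "Lyon"), ("Lyon", "Marseille")]       # toy ground truth
llm_output = [("Paris", "Lyon"), ("Paris", "Marseille")]     # one invented edge
invented, missing = edge_errors(llm_output, reference)
print(invented)   # {frozenset({'Paris', 'Marseille'})}
print(missing)    # {frozenset({'Lyon', 'Marseille'})}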
Published in:
2024 IEEE Conference on Secure and Trustworthy Machine Learning (SaTML)
Auditors need robust methods to assess the compliance of web platforms with the law. However, since they hardly ever have access to the algorithm, implementation, or training data used by a platform, the problem is harder than simple metric estimation …
External link:
http://arxiv.org/abs/2402.09043
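For context, the "simple metric estimation" baseline that the abstract says is insufficient can be written in a few lines, assuming a black-box decision endpoint and a generator of test profiles; both names are hypothetical:

def demographic_parity_gap(platform_decision, sample_profile, n=1000):
    # Estimate P(accept | group A) - P(accept | group B) from n random queries.
    counts = {"A": [0, 0], "B": [0, 0]}       # [accepted, queried] per group
    for _ in range(n):
        profile = sample_profile()            # draws a profile carrying a 'group' field
        group = profile["group"]
        counts[group][1] += 1
        if platform_decision(profile):
            counts[group][0] += 1
    p_a = counts["A"][0] / max(counts["A"][1], 1)
    p_b = counts["B"][0] / max(counts["B"][1], 1)
    return p_a - p_b

The abstract's point is that an external audit has to do better than running such an estimator blindly against a platform it cannot inspect.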
Authors:
de Vos, Martijn, Dhasade, Akash, Bourrée, Jade Garcia, Kermarrec, Anne-Marie, Merrer, Erwan Le, Rottembourg, Benoit, Tredan, Gilles
Existing work in fairness auditing assumes that each audit is performed independently. In this paper, we consider multiple agents working together, each auditing the same platform for different tasks. Agents have two levers: their collaboration strategy …
External link:
http://arxiv.org/abs/2402.08522
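One simple reading of the collaboration lever, under the assumption that agents can at least share the platform answers they have already obtained; the class and method names are illustrative, not the paper's protocol:

class SharedQueryPool:
    # A cache of platform answers shared by all auditing agents, so a query
    # issued by one agent never has to be paid for again by another.
    def __init__(self, platform):
        self.platform = platform          # black-box callable: input -> answer
        self.cache = {}                   # inputs must be hashable
        self.queries_issued = 0

    def ask(self, x):
        if x not in self.cache:
            self.cache[x] = self.platform(x)
            self.queries_issued += 1      # only fresh queries consume budget
        return self.cache[x]

# Each agent then audits its own task (fairness, ranking, ...) through the
# same pool, e.g. answer = pool.ask(profile).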
Published in:
Social Network Analysis and Mining (2023) 13:100
Numerous discussions have argued for the existence of a so-called rabbit-hole (RH) phenomenon on social media platforms that offer advanced personalization to their users. This phenomenon is loosely understood as a collapse of mainstream recommendations …
External link:
http://arxiv.org/abs/2307.09986
Recent legislation requires AI platforms to provide APIs for regulators to assess their compliance with the law. Research has nevertheless shown that platforms can manipulate their API answers through fairwashing. Facing this threat to reliable audits …
External link:
http://arxiv.org/abs/2305.13883
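A minimal sketch of one consistency check an auditor could run against fairwashed answers, assuming the same inputs can be submitted both through the regulator-facing API and through ordinary user-facing requests; both callables are hypothetical and this is an illustration of the threat model, not the paper's method:

def inconsistency_rate(audit_api, user_facing, probe_inputs):
    # Fraction of probes on which the two access channels disagree; a
    # non-negligible rate suggests the API answers are being manipulated.
    mismatches = sum(audit_api(x) != user_facing(x) for x in probe_inputs)
    return mismatches / len(probe_inputs)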
Modern communication networks feature fully decentralized flow rerouting mechanisms which allow them to quickly react to link failures. This paper revisits the fundamental algorithmic problem underlying such local fast rerouting mechanisms. Is it possible …
External link:
http://arxiv.org/abs/2204.03413
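The local rule underlying such mechanisms can be sketched as a preconfigured failover table consulted with purely local information; the table and names below are illustrative and say nothing about the connectivity guarantees the paper actually studies:

def forward(node, dest, failover_table, failed_links):
    # Return the first preconfigured next hop whose incident link is alive,
    # using only failures visible at `node`; None means all candidates failed.
    for next_hop in failover_table[node][dest]:
        if (node, next_hop) not in failed_links and (next_hop, node) not in failed_links:
            return next_hop
    return None

failover_table = {"a": {"d": ["b", "c"]}}   # node 'a' prefers 'b', then 'c', towards 'd'
print(forward("a", "d", failover_table, failed_links={("a", "b")}))   # -> 'c'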
Algorithmic decision making is now widespread, ranging from health care allocation to more common actions such as recommendation or information ranking. The aim of auditing these algorithms has grown alongside it. In this paper, we focus on external audits …
External link:
http://arxiv.org/abs/2203.03711
Shadow banning is the practice by which an online social network limits the visibility of some of its users without their being aware of it. Twitter declares that it does not use such a practice, sometimes invoking the occurrence of "bugs" to justify …
External link:
http://arxiv.org/abs/2012.05101
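A minimal sketch of the kind of black-box visibility comparison such a study relies on, assuming one can list the post identifiers each observer account sees in a thread; the data below is made up, and the paper builds a statistical detector rather than this naive set difference:

def posts_hidden_from_observers(author_view, observer_views):
    # Posts the author sees in their own thread but that no observer account
    # sees: a non-empty result hints at reduced visibility without notification.
    hidden = set(author_view)
    for view in observer_views:
        hidden -= set(view)
    return hidden

author_view = ["p1", "p2", "p3"]
observer_views = [["p1", "p3"], ["p1"]]
print(posts_hidden_from_observers(author_view, observer_views))   # {'p2'}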