Showing 1 - 10
of 166
for the search: '"de Alfaro, Luca"'
Concept drift is a common phenomenon in data streams where the statistical properties of the target variable change over time. Traditionally, drift is assumed to occur globally, affecting the entire dataset uniformly. However, this assumption does no…
External link:
http://arxiv.org/abs/2408.14687
The ability to detect and adapt to changes in data distributions is crucial to maintain the accuracy and reliability of machine learning models. Detection is generally approached by observing the drift of model performance from a global point of view…
External link:
http://arxiv.org/abs/2408.14682
We prove that Fisher-Rao natural gradient descent (FR-NGD) optimally approximates the continuous-time replicator equation (an essential model of evolutionary dynamics), and term this correspondence "conjugate natural selection". This correspondence p…
External link:
http://arxiv.org/abs/2208.13898
Published in:
In Responsible AI @ KDD 2021 Workshop, 2021
When analyzing the behavior of machine learning algorithms, it is important to identify specific data subgroups for which the considered algorithm shows different performance with respect to the entire dataset. The intervention of domain experts is n…
External link:
http://arxiv.org/abs/2108.07450
Author:
Agrawal, Rakshit, de Alfaro, Luca
Graph edges, along with their labels, can represent information of fundamental importance, such as links between web pages, friendship between users, the rating given by users to other users or items, and much more. We introduce LEAP, a trainable, ge…
External link:
http://arxiv.org/abs/1903.04613
Author:
Agrawal, Rakshit, de Alfaro, Luca, Ballarin, Gabriele, Moret, Stefano, Di Pierro, Massimo, Tacchini, Eugenio, Della Vedova, Marco L.
Social networks offer a ready channel for fake and misleading news to spread and exert influence. This paper examines the performance of different reputation algorithms when applied to a large and statistically significant portion of the news that ar…
External link:
http://arxiv.org/abs/1902.07207
Adversarial attacks add perturbations to the input features with the intent of changing the classification produced by a machine learning system. Small perturbations can yield adversarial examples which are misclassified despite being virtually indis…
External link:
http://arxiv.org/abs/1902.01208
Author:
de Alfaro, Luca
In adversarial attacks on machine-learning classifiers, small perturbations are added to input that is correctly classified. The perturbations yield adversarial examples, which are virtually indistinguishable from the unperturbed input, and yet are m…
External link:
http://arxiv.org/abs/1809.09262
Author:
de Alfaro, Luca, Di Pierro, Massimo, Agrawal, Rakshit, Tacchini, Eugenio, Ballarin, Gabriele, Della Vedova, Marco L., Moret, Stefano
Social networks offer a ready channel for fake and misleading news to spread and exert influence. This paper examines the performance of different reputation algorithms when applied to a large and statistically significant portion of the news that ar…
External link:
http://arxiv.org/abs/1802.08066
Author:
Tacchini, Eugenio, Ballarin, Gabriele, Della Vedova, Marco L., Moret, Stefano, de Alfaro, Luca
Published in:
Proceedings of the Second Workshop on Data Science for Social Good (SoGood), Skopje, Macedonia, 2017. CEUR Workshop Proceedings Volume 1960, 2017
In recent years, the reliability of information on the Internet has emerged as a crucial issue for modern society. Social network sites (SNSs) have revolutionized the way in which information is spread by allowing users to freely share content. As a c…
External link:
http://arxiv.org/abs/1704.07506