Showing 1 - 10 of 26 for search: '"Samadi, Samira"'
Machine Learning (ML) models are increasingly used to support or substitute decision making. In applications where skilled experts are a limited resource, it is crucial to reduce their burden and automate decisions when the performance of an ML model…
External link:
http://arxiv.org/abs/2409.20489
Learn-to-Defer is a paradigm that enables learning algorithms to work not in isolation but as a team with human experts. In this paradigm, we permit the system to defer a subset of its tasks to the expert. Although there are currently systems that…
External link:
http://arxiv.org/abs/2407.12710
Machine learning (ML) models are increasingly used in various applications, from recommendation systems in e-commerce to diagnosis prediction in healthcare. In this paper, we present a novel dynamic framework for thinking about the deployment of ML…
External link:
http://arxiv.org/abs/2405.13753
Collective action in machine learning is the study of the control that a coordinated group can have over machine learning algorithms. While previous research has concentrated on assessing the impact of collectives against Bayes (sub-)optimal…
External link:
http://arxiv.org/abs/2405.06582
Counterfactual explanations provide individuals with cost-optimal actions that can alter their labels to desired classes. However, if substantial instances seek state modification, such individual-centric methods can lead to new competitions and…
External link:
http://arxiv.org/abs/2402.04579
With large language models (LLMs) like GPT-4 appearing to behave increasingly human-like in text-based interactions, it has become popular to attempt to evaluate personality traits of LLMs using questionnaires originally developed for humans. While…
External link:
http://arxiv.org/abs/2311.05297
Despite the essential need for comprehensive considerations in responsible AI, factors like robustness, fairness, and causality are often studied in isolation. Adversarial perturbation, used to identify vulnerabilities in models, and individual fairness…
External link:
http://arxiv.org/abs/2310.19391
Causal Adversarial Perturbations for Individual Fairness and Robustness in Heterogeneous Data Spaces
Author:
Ehyaei, Ahmad-Reza, Mohammadi, Kiarash, Karimi, Amir-Hossein, Samadi, Samira, Farnadi, Golnoosh
As responsible AI gains importance in machine learning algorithms, properties such as fairness, adversarial robustness, and causality have received considerable attention in recent years. However, despite their individual significance, there remains…
External link:
http://arxiv.org/abs/2308.08938
One of the goals of learning algorithms is to complement and reduce the burden on human decision makers. The expert deferral setting wherein an algorithm can either predict on its own or defer the decision to a downstream expert helps accomplish this…
External link:
http://arxiv.org/abs/2207.09584
Author:
Kleindessner, Matthäus, Samadi, Samira, Zafar, Muhammad Bilal, Kenthapadi, Krishnaram, Russell, Chris
We initiate the study of fairness for ordinal regression. We adapt two fairness notions previously considered in fair ranking and propose a strategy for training a predictor that is approximately fair according to either notion. Our predictor has the…
External link:
http://arxiv.org/abs/2105.03153