Showing 1 - 4 of 4
for search: '"Murthi, Anupama"'
Understanding model performance on unlabeled data is a fundamental challenge of developing, deploying, and maintaining AI systems. Model performance is typically evaluated using test sets or periodic manual quality assessments, both of which require …
External link:
http://arxiv.org/abs/2012.08625
Author:
Taskazan, Begum, Navratil, Jiri, Arnold, Matthew, Murthi, Anupama, Venkataraman, Ganesh, Elder, Benjamin
Building and maintaining high-quality test sets remains a laborious and expensive task. As a result, test sets in the real world are often not properly kept up to date and drift from the production traffic they are supposed to represent. The frequenc…
External link:
http://arxiv.org/abs/2007.05499
Author:
Arnold, Matthew, Boston, Jeffrey, Desmond, Michael, Duesterwald, Evelyn, Elder, Benjamin, Murthi, Anupama, Navratil, Jiri, Reimer, Darrell
Today's AI deployments often require significant human involvement and skill in the operational stages of the model lifecycle, including pre-release testing, monitoring, problem diagnosis and model improvements. We present a set of enabling technolog…
External link:
http://arxiv.org/abs/2003.12808
Author:
Duesterwald, Evelyn, Murthi, Anupama, Venkataraman, Ganesh, Sinn, Mathieu, Vijaykeerthy, Deepak
Published in:
Safe Machine Learning Workshop at ICLR (International Conference on Learning Representations), 2019
Adversarial training shows promise as an approach for training models that are robust towards adversarial perturbation. In this paper, we explore some of the practical challenges of adversarial training. We present a sensitivity analysis that illustr…
External link:
http://arxiv.org/abs/1905.03837