Showing 1 - 10 of 41 for search: '"ALBERT, KENDRA"'
Author:
Albert, Kendra; Delano, Maggie
False assumptions about sex and gender are deeply embedded in the medical system, including that they are binary, static, and concordant. Machine learning researchers must understand the nature of these assumptions in order to avoid perpetuating them…
External link:
http://arxiv.org/abs/2203.08227
Attacks from adversarial machine learning (ML) have the potential to be used "for good": they can be used to run counter to the existing power structures within ML, creating breathing space for those who would otherwise be the targets of surveillance…
External link:
http://arxiv.org/abs/2107.10302
Author:
Albert, Kendra; Delano, Maggie
Smart weight scales offer bioimpedance-based body composition analysis as a supplement to pure body weight measurement. Companies such as Withings and Fitbit tout composition analysis as providing self-knowledge and the ability to make more informed decisions…
External link:
http://arxiv.org/abs/2101.08325
This paper critically assesses the adequacy and representativeness of physical domain testing for various adversarial machine learning (ML) attacks against computer vision systems involving human subjects. Many papers that deploy such attacks characterize…
External link:
http://arxiv.org/abs/2012.02048
Adversarial Machine Learning is booming, with ML researchers increasingly targeting commercial ML systems such as those used in Facebook, Tesla, Microsoft, IBM, and Google to demonstrate vulnerabilities. In this paper, we ask, "What are the potential legal risks…"
External link:
http://arxiv.org/abs/2006.16179
In addition to their security properties, adversarial machine-learning attacks and defenses have political dimensions. They enable or foreclose certain options for both the subjects of the machine learning systems and for those who deploy them, creating…
External link:
http://arxiv.org/abs/2002.05648
Academic article
Sign-in is required to view this result.
In the last two years, more than 200 papers have been written on how machine learning (ML) systems can fail because of adversarial attacks on the algorithms and data; this number balloons if we were to incorporate papers covering non-adversarial failure modes…
External link:
http://arxiv.org/abs/1911.11034
Author:
Albert, Kendra (kalbert@law.harvard.edu); Grimmelmann, James (james.grimmelmann@cornell.edu)
Published in:
Communications of the ACM, May 2023, Vol. 66, Issue 5, pp. 18-20.
When machine learning systems fail because of adversarial manipulation, how should society expect the law to respond? Through scenarios grounded in adversarial ML literature, we explore how some aspects of computer crime, copyright, and tort law interact…
External link:
http://arxiv.org/abs/1810.10731