Showing 1 - 10 of 161
for search: '"Bellamy, Rachel"'
Author:
Arya, Vijay, Bellamy, Rachel K. E., Chen, Pin-Yu, Dhurandhar, Amit, Hind, Michael, Hoffman, Samuel C., Houde, Stephanie, Liao, Q. Vera, Luss, Ronny, Mojsilovic, Aleksandra, Mourad, Sami, Pedemonte, Pablo, Raghavendra, Ramya, Richards, John, Sattigeri, Prasanna, Shanmugam, Karthikeyan, Singh, Moninder, Varshney, Kush R., Wei, Dennis, Zhang, Yunfeng
Published in:
IAAI 2022
As artificial intelligence and machine learning algorithms become increasingly prevalent in society, multiple stakeholders are calling for these algorithms to provide explanations. At the same time, these stakeholders, whether they be affected citizens …
External link:
http://arxiv.org/abs/2109.12151
Today, AI is increasingly being used in many high-stakes decision-making applications in which fairness is an important concern. Already, there are many examples of AI being biased and making questionable and unfair decisions. The AI research community …
External link:
http://arxiv.org/abs/2002.01621
The wide adoption of Machine Learning technologies has created a rapidly growing demand for people who can train ML models. Some advocated the term "machine teacher" to refer to the role of people who inject domain knowledge into ML models. One promi…
External link:
http://arxiv.org/abs/2001.09219
Today, AI is being increasingly used to help human experts make decisions in high-stakes scenarios. In these scenarios, full automation is often undesirable, not only due to the significance of the outcome, but also because human experts can draw on …
External link:
http://arxiv.org/abs/2001.02114
Author:
Arya, Vijay, Bellamy, Rachel K. E., Chen, Pin-Yu, Dhurandhar, Amit, Hind, Michael, Hoffman, Samuel C., Houde, Stephanie, Liao, Q. Vera, Luss, Ronny, Mojsilović, Aleksandra, Mourad, Sami, Pedemonte, Pablo, Raghavendra, Ramya, Richards, John, Sattigeri, Prasanna, Shanmugam, Karthikeyan, Singh, Moninder, Varshney, Kush R., Wei, Dennis, Zhang, Yunfeng
As artificial intelligence and machine learning algorithms make further inroads into society, calls are increasing from multiple stakeholders for these algorithms to explain their outputs. At the same time, these stakeholders, whether they be affected …
External link:
http://arxiv.org/abs/1909.03012
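Note: arXiv:1909.03012 is the paper introducing the AI Explainability 360 (AIX360) open source Python toolkit. Below is a rough usage sketch of one of its explainers (Protodash, which selects prototypical examples); the toy data and the exact explain() call reflect my reading of the toolkit's documentation, not code from the paper, so treat them as assumptions to verify.

# Illustrative sketch only: pick prototypical rows of a toy dataset with
# AIX360's Protodash explainer. Assumes the aix360 package and its solver
# dependencies are installed.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))   # stand-in for a real feature matrix

explainer = ProtodashExplainer()
# Select m=5 prototypes of X that best summarize X itself, with importance weights.
weights, indices, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", indices)
print("Normalized weights:", np.round(weights / weights.sum(), 3))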
Ensuring fairness of machine learning systems is a human-in-the-loop process. It relies on developers, users, and the general public to identify fairness problems and make improvements. To facilitate the process we need effective, unbiased, and user-…
External link:
http://arxiv.org/abs/1901.07694
Author:
Mallinar, Neil, Shah, Abhishek, Ugrani, Rajendra, Gupta, Ayush, Gurusankar, Manikandan, Ho, Tin Kam, Liao, Q. Vera, Zhang, Yunfeng, Bellamy, Rachel K. E., Yates, Robert, Desmarais, Chris, McGregor, Blake
Many conversational agents in the market today follow a standard bot development framework which requires training intent classifiers to recognize user input. The need to create a proper set of training examples is often the bottleneck in the development …
External link:
http://arxiv.org/abs/1812.06176
Author:
Bellamy, Rachel K. E., Dey, Kuntal, Hind, Michael, Hoffman, Samuel C., Houde, Stephanie, Kannan, Kalapriya, Lohia, Pranay, Martino, Jacquelyn, Mehta, Sameep, Mojsilovic, Aleksandra, Nagar, Seema, Ramamurthy, Karthikeyan Natesan, Richards, John, Saha, Diptikalyan, Sattigeri, Prasanna, Singh, Moninder, Varshney, Kush R., Zhang, Yunfeng
Fairness is an increasingly important concern as machine learning models are used to support decision making in high-stakes applications such as mortgage lending, hiring, and prison sentencing. This paper introduces a new open source Python toolkit for …
External link:
http://arxiv.org/abs/1810.01943
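Note: arXiv:1810.01943 introduces the AI Fairness 360 (AIF360) open source Python toolkit mentioned in the abstract above. The minimal sketch below follows the metric-then-mitigate pattern from the toolkit's tutorials; it assumes the aif360 package is installed and the optional German credit example data has been downloaded as the package instructs, so read it as an illustration rather than the paper's own code.

# Minimal AIF360 sketch: measure a group fairness metric, apply the Reweighing
# pre-processing mitigation, then measure again.
from aif360.datasets import GermanDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

dataset = GermanDataset()            # binary-label credit-risk data; 'sex' is a protected attribute
privileged = [{'sex': 1}]
unprivileged = [{'sex': 0}]

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Mean difference before reweighing:", metric.mean_difference())

rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(transformed,
                                        unprivileged_groups=unprivileged,
                                        privileged_groups=privileged)
print("Mean difference after reweighing:", metric_after.mean_difference())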
Author:
Arnold, Matthew, Bellamy, Rachel K. E., Hind, Michael, Houde, Stephanie, Mehta, Sameep, Mojsilovic, Aleksandra, Nair, Ravi, Ramamurthy, Karthikeyan Natesan, Reimer, Darrell, Olteanu, Alexandra, Piorkowski, David, Tsay, Jason, Varshney, Kush R.
Accuracy is an important concern for suppliers of artificial intelligence (AI) services, but considerations beyond accuracy, such as safety (which includes fairness and explainability), security, and provenance, are also critical elements to engender …
External link:
http://arxiv.org/abs/1808.07261
Author:
Chakraborti, Tathagata, Fadnis, Kshitij P., Talamadupula, Kartik, Dholakia, Mishal, Srivastava, Biplav, Kephart, Jeffrey O., Bellamy, Rachel K. E.
In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human in the loop decision making. Imposing transparency and explainability requirements on such agents is especially important in …
External link:
http://arxiv.org/abs/1709.04517