Showing 1 - 10 of 16 for search: '"Zijie J. Wang"'
Published in:
Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems.
Machine learning (ML) recourse techniques are increasingly used in high-stakes domains, providing end users with actions to alter ML predictions, but they assume ML developers understand what input variables can be changed. However, a recourse plan's …
Machine learning (ML) models can fail in unexpected ways in the real world, but not all model failures are equal. With finite time and resources, ML practitioners are forced to prioritize their model debugging and improvement efforts. Through interviews …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fdf4ee7f90f8895339bac4fc71e96dba
http://arxiv.org/abs/2304.05967
Author:
Zijie J. Wang, Duen Horng Chau
As machine learning (ML) is increasingly integrated into our everyday Web experience, there is a call for transparent and explainable web-based ML. However, existing explainability techniques often require dedicated backend servers, which limit their …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::2b0d1b9948f918c3c03121d88d1760c8
Author:
Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, ShengYun Peng, Haekyu Park, Duen Horng Polo Chau
Published in:
2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
Author:
Seongmin Lee, Sadia Afroz, Haekyu Park, Zijie J. Wang, Omar Shaikh, Vibhor Sehgal, Ankit Peshin, Duen Horng Chau
Published in:
CHI Conference on Human Factors in Computing Systems Extended Abstracts.
Author:
David Munechika, Zijie J. Wang, Jack Reidy, Josh Rubin, Krishna Gade, Krishnaram Kenthapadi, Duen Horng Chau
As machine learning (ML) systems become increasingly widespread, it is necessary to audit these systems for biases prior to their deployment. Recent research has developed algorithms for effectively identifying intersectional bias in the form of inte…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::6d8fc7cf8c550df7f43bf5bfe752d3ab
Author:
Zijie J. Wang, Alex Kale, Harsha Nori, Peter Stella, Mark E. Nunnally, Duen Horng Chau, Mihaela Vorvoreanu, Jennifer Wortman Vaughan, Rich Caruana
Machine learning (ML) interpretability techniques can reveal undesirable patterns in data that models exploit to make predictions, potentially causing harms once deployed. However, how to take action to address these patterns is not always clear. In …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b8e96183869b4a40fc870bce5dd0f96d
Author:
Seongmin Lee, Sadia Afroz, Haekyu Park, Zijie J. Wang, Omar Shaikh, Vibhor Sehgal, Ankit Peshin, Duen Horng Chau
As the information on the Internet continues growing exponentially, understanding and assessing the reliability of a website is becoming increasingly important. Misinformation has far-ranging repercussions, from sowing mistrust in media to undermining …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::5b2b3f6dcc1d581a1566fe2538354490
Published in:
Scopus-Elsevier
Why do large pre-trained transformer-based models perform so well across a wide variety of NLP tasks? Recent research suggests the key may lie in the multi-headed attention mechanism's ability to learn and represent linguistic information. Understanding …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::bb6bbeb6d532f55020749ca943c7e8fc
Published in:
IEEE BigData
Deep learning models are being integrated into a wide range of high-impact, security-critical systems, from self-driving cars to medical diagnosis. However, recent research has demonstrated that many of these deep learning architectures are vulnerable …