Showing 1 - 10 of 91 for search: '"Scott Cheng"'
Published in:
Applied AI Letters, Vol 3, Iss 3, Pp n/a-n/a (2022)
Abstract: In order to be useful, XAI explanations have to be faithful to the AI system they seek to elucidate and also interpretable to the people that engage with them. There exist multiple algorithmic methods for assessing faithfulness, but this is…
External link:
https://doaj.org/article/b1421fb61ce04236aee8c7205ff83e6b
Published in:
Scientific Reports, Vol 11, Iss 1, Pp 1-17 (2021)
Abstract: State-of-the-art deep-learning systems use decision rules that are challenging for humans to model. Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents. We…
External link:
https://doaj.org/article/feea80d1ede04d4f84ba50138beb7648
Published in:
Applied AI Letters, Vol 2, Iss 4, Pp n/a-n/a (2021)
Abstract: Neural network architectures are achieving superhuman performance on an expanding range of tasks. To effectively and safely deploy these systems, their decision-making must be understandable to a wide range of stakeholders. Methods to expl…
External link:
https://doaj.org/article/b7054395821e47d6a75952f952029cfe
Tabular data is common yet typically incomplete, small in volume, and access-restricted due to privacy concerns. Synthetic data generation offers potential solutions. Many metrics exist for evaluating the quality of synthetic tabular data; however, w…
External link:
http://arxiv.org/abs/2403.10424
Published in:
eLife, Vol 6 (2017)
External link:
https://doaj.org/article/be7bba5a5e9c4f92ac60c6697aafaa46
Published in:
eLife, Vol 5 (2016)
Interpreting visual scenes typically requires us to accumulate information from multiple locations in a scene. Using a novel gaze-contingent paradigm in a visual categorization task, we show that participants' scan paths follow an active sensing stra…
External link:
https://doaj.org/article/fdf11e2df6a84b29ad24b6f8d80f0e3b
Published in:
Molecular Systems Biology, Vol 6, Iss 1, Pp 1-13 (2010)
Abstract: Microarrays are powerful tools to probe genome-wide replication kinetics. The rich data sets that result contain more information than has been extracted by current methods of analysis. In this paper, we present an analytical model that in…
External link:
https://doaj.org/article/b3d354f14b2d4ae49bc16509b95cd103
The goal of explainable Artificial Intelligence (XAI) is to generate human-interpretable explanations, but there are no computationally precise theories of how humans interpret AI generated explanations. The lack of theory means that validation of XA…
External link:
http://arxiv.org/abs/2205.08452
Adversarial images highlight how vulnerable modern image classifiers are to perturbations outside of their training set. Human oversight might mitigate this weakness, but depends on humans understanding the AI well enough to predict when it is likely…
External link:
http://arxiv.org/abs/2106.09106
Published in:
Proc. SPIE 11746, Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III, 117462J (2021)
Limited expert time is a key bottleneck in medical imaging. Due to advances in image classification, AI can now serve as decision-support for medical experts, with the potential for great gains in radiologist productivity and, by extension, public he…
External link:
http://arxiv.org/abs/2106.04684