Showing 1 - 10 of 75
for search: '"Chung, Neo Christopher"'
Author:
Chung, Neo Christopher, Chung, Hongkyou, Lee, Hearim, Brocki, Lennart, Chung, Hongbeom, Dyer, George
A cautious interpretation of AI regulations and policy in the EU and the USA places explainability as a central deliverable of compliant AI systems. However, from a technical perspective, explainable AI (XAI) remains an elusive and complex target where …
External link:
http://arxiv.org/abs/2405.03820
Importance estimators are explainability methods that quantify feature importance for deep neural networks (DNNs). In vision transformers (ViTs), the self-attention mechanism naturally leads to attention maps, which are sometimes interpreted as importance …
External link:
http://arxiv.org/abs/2312.02364
The global mental health crisis is looming with a rapid increase in mental disorders, limited resources, and the social stigma of seeking treatment. As the field of artificial intelligence (AI) has witnessed significant advancements in recent years, …
External link:
http://arxiv.org/abs/2311.13857
Published in:
Cancers. 2023; 15(9):2459
Despite the unprecedented performance of deep neural networks (DNNs) in computer vision, their practical application in the diagnosis and prognosis of cancer using medical imaging has been limited. One of the critical challenges for integrating diagnostic …
External link:
http://arxiv.org/abs/2303.11177
Published in:
ICLR 2023 Workshop on Trustworthy ML; Full Paper in Pattern Recognition Letters
Post-hoc explanation methods attempt to make the inner workings of deep neural networks more interpretable. However, since a ground truth is in general lacking, local post-hoc interpretability methods, which assign importance scores to input features, …
External link:
http://arxiv.org/abs/2303.01538
Published in:
6th International Workshop on Dialog Systems (IWDS); 10th IEEE International Conference on Big Data and Smart Computing (BigComp 2022)
Mental health counseling remains a major challenge in modern society due to cost, stigma, fear, and unavailability. We posit that generative artificial intelligence (AI) models designed for mental health counseling could help improve outcomes by lowering …
External link:
http://arxiv.org/abs/2301.09412
Author:
Brocki, Lennart, Marchadour, Wistan, Maison, Jonas, Badic, Bogdan, Papadimitroulas, Panagiotis, Hatt, Mathieu, Vermet, Franck, Chung, Neo Christopher
Published in:
EXTRAAMAS 2022, Lecture Notes in Computer Science (LNAI, volume 13283)
Deep learning has shown superb performance in detecting objects and classifying images, ensuring a great promise for analyzing medical imaging. Translating the success of deep learning to medical imaging, in which doctors need to understand the underlying …
External link:
http://arxiv.org/abs/2209.15398
Despite excellent performance of deep neural networks (DNNs) in image classification, detection, and prediction, characterizing how DNNs make a given decision remains an open problem, resulting in a number of interpretability methods. Post-hoc interpretability …
External link:
http://arxiv.org/abs/2203.02928
Author:
Chung, Neo Christopher
Artificial intelligence (AI) is increasingly utilized in synthesizing visuals, texts, and audio. These AI-based works, often derived from neural networks, are entering the mainstream market as digital paintings, songs, books, and others. We conceptualize …
External link:
http://arxiv.org/abs/2110.03569
Interpretation and improvement of deep neural networks relies on a better understanding of their underlying mechanisms. In particular, gradients of classes or concepts with respect to the input features (e.g., pixels in images) are often used as importance …
External link:
http://arxiv.org/abs/2011.05002