Showing 1 - 10 of 280 for search: '"Tsang, Michael P."'
The fundamental problem with ultrasound-guided diagnosis is that the acquired images are often 2-D cross-sections of a 3-D anatomy, potentially missing important anatomical details. This limitation leads to challenges in ultrasound echocardiography, …
External link:
http://arxiv.org/abs/2409.09680
Author:
Luo, Liang, Zhang, Buyun, Tsang, Michael, Ma, Yinbin, Chu, Ching-Hsiang, Chen, Yuxin, Li, Shen, Hao, Yuchen, Zhao, Yanli, Lakshminarayanan, Guna, Wen, Ellie Dingqiao, Park, Jongsoo, Mudigere, Dheevatsa, Naumov, Maxim
We study a mismatch between the deep learning recommendation models' flat architecture, common distributed training paradigm and hierarchical data center topology. To address the associated inefficiencies, we propose Disaggregated Multi-Tower (DMT), …
External link:
http://arxiv.org/abs/2403.00877
Author:
Vaseli, Hooman, Gu, Ang Nan, Amiri, S. Neda Ahmadi, Tsang, Michael Y., Fung, Andrea, Kondori, Nima, Saadat, Armin, Abolmaesumi, Purang, Tsang, Teresa S. M.
Aortic stenosis (AS) is a common heart valve disease that requires accurate and timely diagnosis for appropriate treatment. Most current automatic AS severity detection methods rely on black-box models with a low level of trustworthiness, which hinders …
External link:
http://arxiv.org/abs/2307.14433
Author:
Zhang, Buyun, Luo, Liang, Liu, Xi, Li, Jay, Chen, Zeliang, Zhang, Weilin, Wei, Xiaohan, Hao, Yuchen, Tsang, Michael, Wang, Wenjun, Liu, Yang, Li, Huayu, Badr, Yasmine, Park, Jongsoo, Yang, Jiyan, Mudigere, Dheevatsa, Wen, Ellie
Learning feature interactions is important to the model performance of online advertising services. As a result, extensive efforts have been devoted to designing effective architectures to learn feature interactions. However, we observe that the prac…
External link:
http://arxiv.org/abs/2203.11014
Interpretation of deep learning models is a very challenging problem because of their large number of parameters, complex connections between nodes, and unintelligible feature representations. Despite this, many view interpretability as a key solution …
External link:
http://arxiv.org/abs/2103.03103
Author:
Jafari, Mohammad H., Luong, Christina, Tsang, Michael, Gu, Ang Nan, Van Woudenberg, Nathan, Rohling, Robert, Tsang, Teresa, Abolmaesumi, Purang
This paper presents U-LanD, a framework for joint detection of key frames and landmarks in videos. We tackle a specifically challenging problem, where training labels are noisy and highly sparse. U-LanD builds upon a pivotal observation: a deep Bayesian …
External link:
http://arxiv.org/abs/2102.01586
In this paper we propose a novel human-centered approach for detecting forgery in face images, using dynamic prototypes as a form of visual explanations. Currently, most state-of-the-art deepfake detections are based on black-box models that process …
External link:
http://arxiv.org/abs/2006.15473
Recommendation is a prevalent application of machine learning that affects many users; therefore, it is important for recommender models to be accurate and interpretable. In this work, we propose a method to both interpret and augment the predictions …
External link:
http://arxiv.org/abs/2006.10966
Machine learning transparency calls for interpretable explanations of how inputs relate to predictions. Feature attribution is a way to analyze the impact of features on predictions. Feature interactions are the contextual dependence between features …
External link:
http://arxiv.org/abs/2006.10965
In an attempt to gather a deeper understanding of how convolutional neural networks (CNNs) reason about human-understandable concepts, we present a method to infer labeled concept data from hidden layer activations and interpret the concepts through …
External link:
http://arxiv.org/abs/1906.04664