Showing 1 - 10 of 60 for the search: '"Jang, Jongseong"'
Author:
Kim, Jong Hyun, Jang, Jongseong
The application of machine learning to transcriptomics data has led to significant advances in cancer research. However, the high dimensionality and complexity of RNA sequencing (RNA-seq) data pose significant challenges in pan-cancer studies. This …
External link:
http://arxiv.org/abs/2408.07233
Recent advancements in digital pathology have led to the development of numerous foundational models that utilize self-supervised learning on patches extracted from gigapixel whole slide images (WSIs). While this approach leverages vast amounts of …
External link:
http://arxiv.org/abs/2408.00380
Author:
Jang, Jongseong, Kyung, Daeun, Kim, Seung Hwan, Lee, Honglak, Bae, Kyunghoon, Choi, Edward (edwardchoi@kaist.ac.kr)
Published in:
Scientific Reports, Vol. 14, Issue 1, pp. 1-11 (10/5/2024); Sci Rep 14, 23199 (2024)
Deep neural networks are increasingly used in medical imaging for tasks such as pathological classification, but they face challenges due to the scarcity of high-quality, expert-labeled training data. Recent efforts have utilized pre-trained …
External link:
http://arxiv.org/abs/2212.07050
Published in:
Journal of Visual Communication and Image Representation 2023
Weakly supervised semantic segmentation (WSSS) with only image-level supervision is a challenging task. Most existing methods exploit Class Activation Maps (CAM) to generate pixel-level pseudo labels for supervised training. However, due to the local …
External link:
http://arxiv.org/abs/2203.07239
Author:
Zhang, Zhibo, Jang, Jongseong, Trabelsi, Chiheb, Li, Ruiwen, Sanner, Scott, Jeong, Yeonjeong, Shim, Dongsub
Contrastive learning has led to substantial improvements in the quality of learned embedding representations for tasks such as image classification. However, a key drawback of existing contrastive augmentation methods is that they may lead to the …
External link:
http://arxiv.org/abs/2111.14271
Despite the success of deep learning in computer vision, recognizing subtle and small objects (or regions) is still challenging. For example, recognizing a baseball or a frisbee in a ground scene or a bone fracture in an X-ray image can …
External link:
http://arxiv.org/abs/2111.13233
Author:
Li, Ruiwen, Zhang, Zhibo, Li, Jiani, Trabelsi, Chiheb, Sanner, Scott, Jang, Jongseong, Jeong, Yeonjeong, Shim, Dongsub
Recent years have seen the introduction of a range of methods for post-hoc explainability of image classifier predictions. However, these post-hoc explanations may not always be faithful to classifier predictions, which poses a significant challenge …
External link:
http://arxiv.org/abs/2105.14162
Author:
Sattarzadeh, Sam, Sudhakar, Mahesh, Plataniotis, Konstantinos N., Jang, Jongseong, Jeong, Yeonjeong, Kim, Hyunwoo
Visualizing the features captured by Convolutional Neural Networks (CNNs) is one of the conventional approaches to interpret the predictions made by these models in numerous image recognition applications. Grad-CAM is a popular solution that provides …
External link:
http://arxiv.org/abs/2102.07805
Author:
Sudhakar, Mahesh, Sattarzadeh, Sam, Plataniotis, Konstantinos N., Jang, Jongseong, Jeong, Yeonjeong, Kim, Hyunwoo
Explainable AI (XAI) is an active research area to interpret a neural network's decision by ensuring transparency and trust in the task-specified learned models. Recently, perturbation-based model analysis has shown better interpretation, but …
External link:
http://arxiv.org/abs/2102.07799