Showing 1 - 10 of 2,953 for search: '"Bae, Sung A."'
3D models have recently been popularized by the possibility of end-to-end training offered first by Neural Radiance Fields and most recently by 3D Gaussian Splatting models. The latter has the major advantage of naturally providing fast training conve…
External link:
http://arxiv.org/abs/2410.23213
The use of 3D models has recently gained traction, owing to the capacity for end-to-end training initially offered by Neural Radiance Fields and more recently by 3D Gaussian Splatting (3DGS) models. The latter holds a significant adva…
External link:
http://arxiv.org/abs/2406.18214
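The two entries above concern 3D Gaussian Splatting, where a scene is a set of trainable Gaussian primitives rendered by depth-ordered alpha compositing. The following is only a minimal NumPy sketch of that compositing step for isotropic 2D Gaussians that are assumed to be already projected and depth-sorted; the function name, parameters, and the isotropic simplification are illustrative assumptions, not code from the cited papers.

# Minimal sketch (assumed simplification): front-to-back alpha compositing
# of isotropic 2D Gaussians that have already been projected to the image.
import numpy as np

def render_gaussians(means, scales, colors, opacities, depths, H, W):
    """Composite isotropic 2D Gaussians into an H x W RGB image."""
    image = np.zeros((H, W, 3))
    transmittance = np.ones((H, W))      # how much light still passes through each pixel
    ys, xs = np.mgrid[0:H, 0:W]
    for i in np.argsort(depths):         # front-to-back order
        d2 = (xs - means[i, 0]) ** 2 + (ys - means[i, 1]) ** 2
        alpha = opacities[i] * np.exp(-0.5 * d2 / scales[i] ** 2)
        image += (transmittance * alpha)[..., None] * colors[i]
        transmittance *= 1.0 - alpha
    return image

# Toy usage: a red Gaussian in front of a blue one.
img = render_gaussians(
    means=np.array([[20.0, 24.0], [28.0, 24.0]]),
    scales=np.array([4.0, 6.0]),
    colors=np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
    opacities=np.array([0.8, 0.8]),
    depths=np.array([0.5, 1.0]),
    H=48, W=48,
)

The per-pixel transmittance loop is what keeps the compositing differentiable with respect to each Gaussian's parameters; the full method additionally uses anisotropic 3D Gaussians and optimizes them end-to-end against training views.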
Author:
Ha, Hyunwoo, Oh, Hyun-Bin, Kim, Jun-Seong, Kwon, Byung-Ki, Kim, Sung-Bin, Tran, Linh-Tam, Kim, Ji-Yun, Bae, Sung-Ho, Oh, Tae-Hyun
Video motion magnification is a technique for capturing and amplifying subtle motion in a video that is invisible to the naked eye. Prior deep-learning-based work has successfully demonstrated modelling of the motion magnification problem with outstand…
External link:
http://arxiv.org/abs/2403.01898
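The motion-magnification entry above is deep-learning based; as background only, here is a minimal sketch of the classical linear Eulerian idea it builds on: band-pass each pixel's intensity over time and add the amplified band back to the frame. The function name, filter rates, and amplification factor are illustrative assumptions, not the cited method.

# Minimal sketch (assumption): linear Eulerian motion magnification via a
# temporal band-pass built from two IIR low-pass filters.
import numpy as np

def magnify(frames, alpha=20.0, r_slow=0.05, r_fast=0.4):
    """frames: (T, H, W) array with values in [0, 255]; returns magnified frames."""
    slow = frames[0].astype(np.float64)   # slow-tracking low-pass
    fast = frames[0].astype(np.float64)   # fast-tracking low-pass
    out = []
    for f in frames.astype(np.float64):
        slow += r_slow * (f - slow)
        fast += r_fast * (f - fast)
        band = fast - slow                # temporal band-pass: the subtle motion signal
        out.append(np.clip(f + alpha * band, 0.0, 255.0))
    return np.stack(out)

# Toy usage on random frames, just to show the call shape.
frames = np.random.rand(8, 32, 32) * 255
magnified = magnify(frames, alpha=20.0)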
Author:
Cha, Junghun, Haider, Ali, Yang, Seoyun, Jin, Hoeyeong, Yang, Subin, Uddin, A. F. M. Shahab, Kim, Jaehyoung, Kim, Soo Ye, Bae, Sung-Ho
A significant volume of analog information, i.e., documents and images, has been digitized in the form of scanned copies for storing, sharing, and/or analyzing in the digital world. However, the quality of such content is severely degraded by vario…
External link:
http://arxiv.org/abs/2402.05350
Binary neural networks (BNNs) have been widely adopted to reduce the computational cost and memory storage on edge-computing devices by using a one-bit representation for activations and weights. However, as neural networks become wider/deeper to impro…
External link:
http://arxiv.org/abs/2308.13735
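The BNN entry above hinges on one-bit weights and activations. Below is a minimal PyTorch sketch of the usual recipe, given purely for orientation and not taken from the cited paper: binarise with sign() in the forward pass and let gradients flow through a straight-through estimator in the backward pass.

# Minimal sketch (assumption): a binary linear layer with a straight-through estimator.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                              # values in {-1, 0, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).to(grad_out.dtype)  # pass gradient only inside [-1, 1]

class BinaryLinear(nn.Linear):
    def forward(self, x):
        wb = BinarizeSTE.apply(self.weight)               # 1-bit weights
        xb = BinarizeSTE.apply(x)                         # 1-bit activations
        return F.linear(xb, wb, self.bias)                # bias kept full precision here

# Toy usage: gradients still reach the real-valued latent weights.
layer = BinaryLinear(16, 4)
out = layer(torch.randn(2, 16))
out.sum().backward()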
Author:
Zhang, Chaoning, Han, Dongshen, Qiao, Yu, Kim, Jung Uk, Bae, Sung-Ho, Lee, Seungkyu, Hong, Choong Seon
The Segment Anything Model (SAM) has attracted significant attention due to its impressive zero-shot transfer performance and high versatility for numerous vision applications (such as image editing with fine-grained control). Many such applications need…
External link:
http://arxiv.org/abs/2306.14289
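Several entries in this list build on SAM's promptable, zero-shot segmentation. For orientation, a minimal usage sketch with the open-source segment-anything package is shown below; the checkpoint filename, the placeholder image, and the single foreground point are assumptions for illustration.

# Minimal sketch (assumption): prompt SAM with one foreground point.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Checkpoint is assumed to have been downloaded locally beforehand.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)   # placeholder for a real RGB image
predictor.set_image(image)

masks, scores, logits = predictor.predict(
    point_coords=np.array([[320, 240]]),          # one click, in (x, y) pixel coordinates
    point_labels=np.array([1]),                   # 1 = foreground point
    multimask_output=True,                        # return several candidate masks
)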
Author:
Zhang, Chaoning, Cho, Joseph, Puspitasari, Fachrina Dewi, Zheng, Sheng, Li, Chenghao, Qiao, Yu, Kang, Taegoo, Shan, Xinru, Zhang, Chenshuang, Qin, Caiyan, Rameau, Francois, Lee, Lik-Hang, Bae, Sung-Ho, Hong, Choong Seon
The Segment Anything Model (SAM), developed by Meta AI Research, represents a significant breakthrough in computer vision, offering a robust framework for image and video segmentation. This survey provides a comprehensive exploration of the SAM famil…
External link:
http://arxiv.org/abs/2306.06211
Author:
Li, Chenghao, Zhang, Chaoning, Cho, Joseph, Waghwase, Atish, Lee, Lik-Hang, Rameau, Francois, Yang, Yang, Bae, Sung-Ho, Hong, Choong Seon
Generative AI has made significant progress in recent years, with text-guided content generation being the most practical, as it facilitates interaction between human instructions and AI-generated content (AIGC). Thanks to advancements in text-to-imag…
External link:
http://arxiv.org/abs/2305.06131
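The entry above mentions text-guided generation as the main interface between human instructions and AIGC. A minimal sketch using the open-source diffusers library is shown below purely for orientation; the model id, the prompt, and the availability of a GPU are assumptions, and this is not a model from the cited survey.

# Minimal sketch (assumption): text-to-image generation with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",             # model id is an assumption
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                            # assumes a CUDA GPU is available

image = pipe("a watercolor painting of a mountain lake at sunrise").images[0]
image.save("output.png")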
The Segment Anything Model (SAM) has recently attracted significant attention due to its impressive zero-shot performance on various downstream tasks. The computer vision (CV) area might follow the natural language processing (NLP) area to emba…
External link:
http://arxiv.org/abs/2305.00866
Author:
Han, Dongsheng, Zhang, Chaoning, Qiao, Yu, Qamar, Maryam, Jung, Yuna, Lee, SeungKyu, Bae, Sung-Ho, Hong, Choong Seon
Meta AI Research has recently released the Segment Anything Model (SAM), which is trained on a large segmentation dataset of over 1 billion masks. As a foundation model in the field of computer vision, SAM has gained attention for…
External link:
http://arxiv.org/abs/2305.00278