Showing 1 - 9 of 9 for search: '"Yang, Hunmin"'
Recent vision-language foundation models, such as CLIP, have demonstrated superior capabilities in learning representations that are transferable across a diverse range of downstream tasks and domains. With the emergence of such powerful models, it…
External link:
http://arxiv.org/abs/2407.20657
Deep neural networks are known to be vulnerable to security risks due to the inherently transferable nature of adversarial examples. Despite the success of recent generative model-based attacks demonstrating strong transferability, it still remains a c…
External link:
http://arxiv.org/abs/2407.20653
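The entry above concerns adversarial examples whose effect transfers across models. As a rough, generic illustration of that transferability (not the generative attack the abstract refers to), the sketch below crafts single-step FGSM perturbations on one pretrained ImageNet classifier and checks how often they also flip the predictions of a second classifier; the model pair, epsilon, and the random stand-in batch are assumptions made only to keep the example self-contained.

```python
# Illustrative sketch only: FGSM transferability between two pretrained
# ImageNet classifiers. Not the generative attack from the entry above.
import torch
import torchvision.models as models

surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

def fgsm(model, x, eps=8 / 255):
    """Craft an untargeted FGSM perturbation against `model`."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    # Use the model's own prediction as the label and move away from it.
    loss = torch.nn.functional.cross_entropy(logits, logits.argmax(dim=1))
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Dummy batch standing in for real images (assumption: in practice you
# would load photos and apply each model's preprocessing transforms).
x = torch.rand(4, 3, 224, 224)
x_adv = fgsm(surrogate, x)

with torch.no_grad():
    clean_pred = target(x).argmax(dim=1)
    adv_pred = target(x_adv).argmax(dim=1)

# Fraction of inputs whose target-model prediction flipped: a crude proxy
# for how well perturbations crafted on the surrogate transfer.
print("transfer flip rate:", (clean_pred != adv_pred).float().mean().item())
```

In practice, generator-, momentum-, or input-diversity-based attacks are usually reported to transfer better than single-step FGSM; the sketch only shows how transfer is typically measured.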
One of the biggest challenges in single-view 3D shape reconstruction in the wild is the scarcity of <3D shape, 2D image>-paired data from real-world environments. Inspired by remarkable achievements via domain randomization, we propose ObjectDR, which…
External link:
http://arxiv.org/abs/2403.14539
Author:
Suryanto, Naufal, Kim, Yongsu, Larasati, Harashta Tatimma, Kang, Hyoeun, Le, Thi-Thu-Huong, Hong, Yoonyoung, Yang, Hunmin, Oh, Se-Yoon, Kim, Howon
Adversarial camouflage has garnered attention for its ability to attack object detectors from any viewpoint by covering the entire object's surface. However, universality and robustness in existing methods often fall short, as the transferability aspe…
External link:
http://arxiv.org/abs/2308.07009
In a joint vision-language space, a text feature (e.g., from "a photo of a dog") could effectively represent its relevant image features (e.g., from dog photos). Also, a recent study has demonstrated the cross-modal transferability phenomenon of this…
External link:
http://arxiv.org/abs/2307.15199
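The entry above relies on CLIP's joint vision-language space, where a text feature such as that of "a photo of a dog" can stand in for the features of matching images. As a minimal sketch of that idea (using the Hugging Face CLIP API, not anything from the paper itself), the snippet below compares L2-normalized text and image features by cosine similarity; the checkpoint name and the placeholder image are assumptions made for the sake of a runnable example.

```python
# Minimal sketch: cosine similarity between CLIP text and image features
# in the shared embedding space. Checkpoint and dummy image are assumptions.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

ckpt = "openai/clip-vit-base-patch32"
model = CLIPModel.from_pretrained(ckpt).eval()
processor = CLIPProcessor.from_pretrained(ckpt)

prompts = ["a photo of a dog", "a photo of a cat", "a photo of a car"]
# Placeholder image; in practice this would be a real photo (e.g. of a dog).
image = Image.new("RGB", (224, 224), color=(128, 128, 128))

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    text_feat = model.get_text_features(
        input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"]
    )
    image_feat = model.get_image_features(pixel_values=inputs["pixel_values"])

# L2-normalize so dot products are cosine similarities in the joint space.
text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)

# For a real dog photo, the "a photo of a dog" prompt should score highest,
# which is the cross-modal correspondence the entry above refers to.
similarity = (image_feat @ text_feat.T).squeeze(0)
for prompt, score in zip(prompts, similarity.tolist()):
    print(f"{score:+.3f}  {prompt}")
```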
Author:
Suryanto, Naufal, Kim, Yongsu, Kang, Hyoeun, Larasati, Harashta Tatimma, Yun, Youngyeo, Le, Thi-Thu-Huong, Yang, Hunmin, Oh, Se-Yoon, Kim, Howon
To perform adversarial attacks in the physical world, many studies have proposed adversarial camouflage, a method to hide a target object by applying camouflage patterns on 3D object surfaces. For obtaining optimal physical adversarial camouflage, pr…
External link:
http://arxiv.org/abs/2203.09831
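The two camouflage entries above optimize textures over entire 3D object surfaces, which requires a differentiable renderer or transformation network. The sketch below is only a heavily simplified, hypothetical stand-in for that pipeline: it optimizes a flat adversarial patch pasted at random positions onto images so that a pretrained classifier loses confidence in its original predictions. The model, patch size, optimizer settings, and random-placement loop are all illustrative assumptions, not the setup of either paper.

```python
# Greatly simplified stand-in for physical camouflage optimization:
# optimize a flat patch (instead of a 3D surface texture) so that a
# pretrained classifier (instead of a detector) changes its predictions.
import torch
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
for p in model.parameters():
    p.requires_grad_(False)

# Dummy "scene" images in [0, 1]; real use would load photos of the object.
images = torch.rand(8, 3, 224, 224)
with torch.no_grad():
    orig_labels = model(images).argmax(dim=1)

patch = torch.rand(3, 48, 48, requires_grad=True)   # the camouflage pattern
opt = torch.optim.Adam([patch], lr=0.05)

def paste_random(imgs, patch):
    """Paste the patch at a random location in each image (crude EOT)."""
    out = imgs.clone()
    _, _, H, W = imgs.shape
    ph, pw = patch.shape[1:]
    for i in range(out.shape[0]):
        y = torch.randint(0, H - ph + 1, (1,)).item()
        x = torch.randint(0, W - pw + 1, (1,)).item()
        out[i, :, y:y + ph, x:x + pw] = patch.clamp(0, 1)
    return out

for step in range(30):
    opt.zero_grad()
    logits = model(paste_random(images, patch))
    # Maximize loss on the original predictions by minimizing its negative.
    loss = -torch.nn.functional.cross_entropy(logits, orig_labels)
    loss.backward()
    opt.step()

with torch.no_grad():
    flipped = model(paste_random(images, patch)).argmax(dim=1) != orig_labels
print("predictions changed by patch:", flipped.float().mean().item())
```

Swapping the classifier for an object detector and the random pasting for a differentiable rendering of the texture onto the object's 3D surface recovers the general shape of physical camouflage optimization, which is what the papers above address.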
Published in:
2020 20th International Conference on Control, Automation and Systems (ICCAS).
In tandem with advances in deep learning technology, vehicle detection using convolutional neural networks has become mainstream in the field of autonomous driving and ADAS. Taking advantage of this, many real-image datasets have been produced in…
Published in:
2020 20th International Conference on Control, Automation and Systems (ICCAS).
Object detection is one of the main tasks for deep learning applications. Deep learning performance has already exceeded human detection ability when there is plenty of data for training deep neural networks. In the case of militar…
Published in:
2020 20th International Conference on Control, Automation and Systems (ICCAS).
Deep neural networks tend to be erroneous when the training and test distributions differ. In particular, neural classifiers are brittle to adversarial examples and highly overconfident on out-of-distribution examples. Hybrid modeling of generative and…