Showing 1 - 10 of 28 for search: '"Ke, Junjie"'
Author:
Lee, Seung Hyun, Ke, Junjie, Li, Yinxiao, He, Junfeng, Hickson, Steven, Datsenko, Katie, Kim, Sangpil, Yang, Ming-Hsuan, Essa, Irfan, Yang, Feng
The goal of image cropping is to identify visually appealing crops within an image. Conventional methods rely on specialized architectures trained on specific datasets, which are difficult to adapt to new requirements. Recent breakthroughs in large v…
External link:
http://arxiv.org/abs/2408.07790
Author:
Zhu, William Yicheng, Ye, Keren, Ke, Junjie, Yu, Jiahui, Guibas, Leonidas, Milanfar, Peyman, Yang, Feng
Recognizing and disentangling visual attributes from objects is foundational to many computer vision applications. While large vision-language representations like CLIP have largely resolved the task of zero-shot object recognition, zero-shot visual a…
External link:
http://arxiv.org/abs/2408.04102
Author:
Oguz, Ilker, Dinc, Niyazi Ulas, Yildirim, Mustafa, Ke, Junjie, Yoo, Innfarn, Wang, Qifei, Yang, Feng, Moser, Christophe, Psaltis, Demetri
Diffusion models generate new samples by progressively removing noise from an initial random distribution. This inference procedure generally invokes a trained neural network many times to obtain the final output, creating sign…
External link:
http://arxiv.org/abs/2407.10897
Author:
Lee, Seung Hyun, Li, Yinxiao, Ke, Junjie, Yoo, Innfarn, Zhang, Han, Yu, Jiahui, Wang, Qifei, Deng, Fei, Entis, Glenn, He, Junfeng, Li, Gang, Kim, Sangpil, Essa, Irfan, Yang, Feng
Recent works have demonstrated that using reinforcement learning (RL) with multiple quality rewards can improve the quality of generated images in text-to-image (T2I) generation. However, manually adjusting reward weights poses challenges and may cau…
External link:
http://arxiv.org/abs/2401.05675
Author:
Liang, Youwei, He, Junfeng, Li, Gang, Li, Peizhao, Klimovskiy, Arseniy, Carolan, Nicholas, Sun, Jiao, Pont-Tuset, Jordi, Young, Sarah, Yang, Feng, Ke, Junjie, Dvijotham, Krishnamurthy Dj, Collins, Katie, Luo, Yiwen, Li, Yang, Kohlhoff, Kai J, Ramachandran, Deepak, Navalpakkam, Vidhya
Recent Text-to-Image (T2I) generation models such as Stable Diffusion and Imagen have made significant progress in generating high-resolution images from text descriptions. However, many generated images still suffer from issues such as artifacts…
External link:
http://arxiv.org/abs/2312.10240
Author:
Oguz, Ilker, Ke, Junjie, Wang, Qifei, Yang, Feng, Yildirim, Mustafa, Dinc, Niyazi Ulas, Hsieh, Jih-Liang, Moser, Christophe, Psaltis, Demetri
Neural networks (NNs) have demonstrated remarkable capabilities across various tasks, but their computation-intensive nature demands faster and more energy-efficient hardware implementations. Optics-based platforms, using technologies such as silicon phot…
External link:
http://arxiv.org/abs/2305.19170
Assessing the aesthetics of an image is challenging, as it is influenced by multiple factors including composition, color, style, and high-level semantics. Existing image aesthetic assessment (IAA) methods primarily rely on human-labeled rating score…
External link:
http://arxiv.org/abs/2303.14302
No-reference video quality assessment (NR-VQA) for user-generated content (UGC) is crucial for understanding and improving visual experience. Unlike video recognition tasks, VQA tasks are sensitive to changes in input resolution. Since large amounts…
External link:
http://arxiv.org/abs/2303.07489
Published in:
In Knowledge-Based Systems 5 September 2024 299
Published in:
In Neurocomputing 7 August 2024 593