Showing 1 - 10 of 104 for search: '"Arora, Aditya"'
Author:
Gupta, Akshita, Arora, Aditya, Narayan, Sanath, Khan, Salman, Khan, Fahad Shahbaz, Taylor, Graham W.
Open-Vocabulary Temporal Action Localization (OVTAL) enables a model to recognize any desired action category in videos without the need to explicitly curate training data for all categories. However, this flexibility poses significant challenges, as…
External link:
http://arxiv.org/abs/2406.15556
Author:
Li, Yawei, Zhang, Kai, Timofte, Radu, Van Gool, Luc, Kong, Fangyuan, Li, Mingxi, Liu, Songwei, Du, Zongcai, Liu, Ding, Zhou, Chenhui, Chen, Jingyi, Han, Qingrui, Li, Zheyuan, Liu, Yingqi, Chen, Xiangyu, Cai, Haoming, Qiao, Yu, Dong, Chao, Sun, Long, Pan, Jinshan, Zhu, Yi, Zong, Zhikai, Liu, Xiaoxiao, Hui, Zheng, Yang, Tao, Ren, Peiran, Xie, Xuansong, Hua, Xian-Sheng, Wang, Yanbo, Ji, Xiaozhong, Lin, Chuming, Luo, Donghao, Tai, Ying, Wang, Chengjie, Zhang, Zhizhong, Xie, Yuan, Cheng, Shen, Luo, Ziwei, Yu, Lei, Wen, Zhihong, Wu, Qi, Li, Youwei, Fan, Haoqiang, Sun, Jian, Liu, Shuaicheng, Huang, Yuanfei, Jin, Meiguang, Huang, Hua, Liu, Jing, Zhang, Xinjian, Wang, Yan, Long, Lingshun, Li, Gen, Zhang, Yuanfan, Cao, Zuowei, Sun, Lei, Panaetov, Alexander, Wang, Yucong, Cai, Minjie, Wang, Li, Tian, Lu, Wang, Zheyuan, Ma, Hongbing, Liu, Jie, Chen, Chao, Cai, Yidong, Tang, Jie, Wu, Gangshan, Wang, Weiran, Huang, Shirui, Lu, Honglei, Liu, Huan, Wang, Keyan, Chen, Jun, Chen, Shi, Miao, Yuchun, Huang, Zimo, Zhang, Lefei, Ayazoğlu, Mustafa, Xiong, Wei, Xiong, Chengyi, Wang, Fei, Li, Hao, Wen, Ruimian, Yang, Zhijing, Zou, Wenbin, Zheng, Weixin, Ye, Tian, Zhang, Yuncheng, Kong, Xiangzhen, Arora, Aditya, Zamir, Syed Waqas, Khan, Salman, Hayat, Munawar, Khan, Fahad Shahbaz, Gao, Dandan, Zhou, Dengwen, Ning, Qian, Tang, Jingzhu, Huang, Han, Wang, Yufei, Peng, Zhangheng, Li, Haobo, Guan, Wenxue, Gong, Shenghua, Li, Xin, Liu, Jun, Wang, Wanjun, Zhou, Dengwen, Zeng, Kun, Lin, Hanjiang, Chen, Xinyu, Fang, Jinsheng
This paper reviews the NTIRE 2022 challenge on efficient single image super-resolution with focus on the proposed solutions and results. The task of the challenge was to super-resolve an input image with a magnification factor of $\times$4 based on p…
External link:
http://arxiv.org/abs/2205.05675
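As a naive point of reference for what a $\times$4 magnification factor means (illustrative only, unrelated to the challenge solutions), nearest-neighbor upsampling simply repeats each pixel into a 4x4 block:

```python
# Naive x4 "upscaling" baseline: nearest-neighbor replication of a
# grayscale image stored as a nested list. Real super-resolution methods
# learn this mapping; this sketch only shows the size relationship.
def upscale_nn(img, factor=4):
    """Repeat every row and every pixel `factor` times."""
    return [[px for px in row for _ in range(factor)]
            for row in img for _ in range(factor)]

lr = [[0, 255], [255, 0]]   # 2x2 low-resolution input
hr = upscale_nn(lr)         # 8x8 output: each pixel becomes a 4x4 block
print(len(hr), len(hr[0]))  # 8 8
```

The challenge's efficiency focus is about doing far better than this baseline at low compute, not about the resizing itself.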
Author:
Zamir, Syed Waqas, Arora, Aditya, Khan, Salman, Hayat, Munawar, Khan, Fahad Shahbaz, Yang, Ming-Hsuan, Shao, Ling
Given a degraded input image, image restoration aims to recover the missing high-quality image content. Numerous applications demand effective image restoration, e.g., computational photography, surveillance, autonomous vehicles, and remote sensing.
External link:
http://arxiv.org/abs/2205.01649
Author:
Pansare, Niketan, Katukuri, Jay, Arora, Aditya, Cipollone, Frank, Shaik, Riyaaz, Tokgozoglu, Noyan, Venkataraman, Chandru
In deep learning, embeddings are widely used to represent categorical entities such as words, apps, and movies. An embedding layer maps each entity to a unique vector, causing the layer's memory requirement to be proportional to the number of entities…
External link:
http://arxiv.org/abs/2203.10135
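The memory claim in the abstract can be seen in a minimal sketch (hypothetical, not the paper's method): an embedding table keyed by entity, where storage grows linearly with the number of entities, one d-dimensional vector each.

```python
# Minimal embedding-table sketch: each entity gets its own random
# d-dimensional vector, so memory is O(num_entities * dim).
import random

def build_embedding_table(entities, dim, seed=0):
    """Map each entity to a unique vector of length `dim`."""
    rng = random.Random(seed)
    return {e: [rng.uniform(-1, 1) for _ in range(dim)] for e in entities}

table = build_embedding_table([f"app_{i}" for i in range(1000)], dim=16)
print(len(table))           # 1000 entities -> 1000 stored vectors
print(len(table["app_0"]))  # each of length 16
```

Techniques like hashing or compositional embeddings aim to break this one-vector-per-entity proportionality.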
One-shot learning has become an important research topic in the last decade with many real-world applications. The goal of one-shot learning is to classify unlabeled instances when there is only one labeled example per class. Conventional problem set…
External link:
http://arxiv.org/abs/2201.09202
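The one-shot setting described above can be sketched with a nearest-support classifier (an illustrative baseline, not this paper's approach; the data is made up): one labeled "shot" per class, and each query is assigned to the class of the nearest shot.

```python
# One-shot classification baseline: with a single support example per
# class, assign a query to the class of the nearest support vector
# under Euclidean distance.
import math

def classify_one_shot(shots, x):
    """shots: {class_label: single support vector}; returns nearest label."""
    return min(shots, key=lambda c: math.dist(shots[c], x))

shots = {"cat": [0.0, 0.0], "dog": [1.0, 1.0]}
print(classify_one_shot(shots, [0.1, 0.2]))  # cat
print(classify_one_shot(shots, [0.9, 0.8]))  # dog
```

Most one-shot methods learn an embedding in which such a nearest-support rule works well.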
Author:
Zamir, Syed Waqas, Arora, Aditya, Khan, Salman, Hayat, Munawar, Khan, Fahad Shahbaz, Yang, Ming-Hsuan
Since convolutional neural networks (CNNs) perform well at learning generalizable image priors from large-scale data, these models have been extensively applied to image restoration and related tasks. Recently, another class of neural architectures,…
External link:
http://arxiv.org/abs/2111.09881
Author:
Zamir, Syed Waqas, Arora, Aditya, Khan, Salman, Hayat, Munawar, Khan, Fahad Shahbaz, Yang, Ming-Hsuan, Shao, Ling
Image restoration tasks demand a complex balance between spatial details and high-level contextualized information while recovering images. In this paper, we propose a novel synergistic design that can optimally balance these competing goals. Our main…
External link:
http://arxiv.org/abs/2102.02808
Author:
Arora, Aditya, Haris, Muhammad, Zamir, Syed Waqas, Hayat, Munawar, Khan, Fahad Shahbaz, Shao, Ling, Yang, Ming-Hsuan
Images captured under low-light conditions manifest poor visibility, lack contrast and color vividness. Compared to conventional approaches, deep convolutional neural networks (CNNs) perform well in enhancing images. However, being solely reliant on…
External link:
http://arxiv.org/abs/2101.00850
Distance metric learning has attracted much attention in recent years, where the goal is to learn a distance metric based on user feedback. Conventional approaches to metric learning mainly focus on learning the Mahalanobis distance metric on data at…
External link:
http://arxiv.org/abs/2011.04062
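For reference, the Mahalanobis distance mentioned in the abstract is d_M(x, y) = sqrt((x - y)^T M (x - y)) for a learned positive semi-definite matrix M; with M = I it reduces to the Euclidean distance. A small sketch (illustrative, not this paper's algorithm):

```python
# Mahalanobis distance for a given matrix M: sqrt((x-y)^T M (x-y)).
import math

def mahalanobis(x, y, M):
    d = [a - b for a, b in zip(x, y)]                               # x - y
    Md = [sum(M[i][j] * d[j] for j in range(len(d)))                # M (x - y)
          for i in range(len(d))]
    return math.sqrt(sum(di * mi for di, mi in zip(d, Md)))

identity = [[1.0, 0.0], [0.0, 1.0]]
print(mahalanobis([0.0, 0.0], [3.0, 4.0], identity))  # 5.0 (Euclidean)
# A learned M can reweight axes, changing which points count as "close":
M = [[4.0, 0.0], [0.0, 1.0]]
print(mahalanobis([0.0, 0.0], [3.0, 4.0], M))  # sqrt(4*9 + 16) = sqrt(52)
```

Metric-learning methods fit M (or a factorization of it) from feedback such as similar/dissimilar pairs.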
Author:
Wei, Pengxu, Lu, Hannan, Timofte, Radu, Lin, Liang, Zuo, Wangmeng, Pan, Zhihong, Li, Baopu, Xi, Teng, Fan, Yanwen, Zhang, Gang, Liu, Jingtuo, Han, Junyu, Ding, Errui, Xie, Tangxin, Cao, Liang, Zou, Yan, Shen, Yi, Zhang, Jialiang, Jia, Yu, Cheng, Kaihua, Wu, Chenhuan, Lin, Yue, Liu, Cen, Peng, Yunbo, Zou, Xueyi, Luo, Zhipeng, Yao, Yuehan, Xu, Zhenyu, Zamir, Syed Waqas, Arora, Aditya, Khan, Salman, Hayat, Munawar, Khan, Fahad Shahbaz, Ahn, Keon-Hee, Kim, Jun-Hyuk, Choi, Jun-Ho, Lee, Jong-Seok, Zhao, Tongtong, Zhao, Shanshan, Han, Yoseob, Kim, Byung-Hoon, Baek, JaeHyun, Wu, Haoning, Xu, Dejia, Zhou, Bo, Guan, Wei, Li, Xiaobo, Ye, Chen, Li, Hao, Zhong, Haoyu, Shi, Yukai, Yang, Zhijing, Yang, Xiaojun, Li, Xin, Jin, Xin, Wu, Yaojun, Pang, Yingxue, Liu, Sen, Liu, Zhi-Song, Wang, Li-Wen, Li, Chu-Tak, Cani, Marie-Paule, Siu, Wan-Chi, Zhou, Yuanbo, Umer, Rao Muhammad, Micheloni, Christian, Cong, Xiaofeng, Gupta, Rajat, Almasri, Feras, Vandamme, Thomas, Debeir, Olivier
Published in:
European Conference on Computer Vision Workshops, 2020
This paper introduces the real image Super-Resolution (SR) challenge that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2020. This challenge involves three tracks to super-resolve an input image for $\times$…
External link:
http://arxiv.org/abs/2009.12072