Showing 1 - 10 of 10 for search: "Lian, Xiaochen"
The semantic segmentation of nighttime scenes is a challenging problem that is key to impactful applications like self-driving cars. Yet, it has received little attention compared to its daytime counterpart. In this paper, we propose NightLab, a novel…
External link:
http://arxiv.org/abs/2204.05538
High-resolution representations (HR) are essential for dense prediction tasks such as segmentation, detection, and pose estimation. Learning HR representations is typically ignored in previous Neural Architecture Search (NAS) methods that focus on image classification…
External link:
http://arxiv.org/abs/2106.06560
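As background for the snippet above, a minimal sketch of what maintaining a high-resolution representation can look like, assuming an HRNet-style two-branch exchange; this is a generic illustration, not the architecture the paper searches for, and all names in it are hypothetical:

```python
# Generic illustration (HRNet-style, not the paper's searched architecture) of
# keeping a full-resolution branch alive next to a downsampled one and fusing
# them, which is what dense prediction tasks benefit from.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoBranchFuse(nn.Module):
    def __init__(self, ch_hi=32, ch_lo=64):
        super().__init__()
        self.hi = nn.Conv2d(ch_hi, ch_hi, 3, padding=1)           # full-resolution path
        self.lo = nn.Conv2d(ch_lo, ch_lo, 3, padding=1)           # 1/2-resolution path
        self.lo_to_hi = nn.Conv2d(ch_lo, ch_hi, 1)                # project before upsampling
        self.hi_to_lo = nn.Conv2d(ch_hi, ch_lo, 3, stride=2, padding=1)

    def forward(self, hi, lo):
        # Cross-resolution exchange: each branch receives the other's features.
        hi2 = self.hi(hi) + F.interpolate(self.lo_to_hi(lo), size=hi.shape[-2:])
        lo2 = self.lo(lo) + self.hi_to_lo(hi)
        return hi2, lo2                                           # HR output feeds a dense head

hi, lo = torch.randn(1, 32, 64, 64), torch.randn(1, 64, 32, 32)
hi2, lo2 = TwoBranchFuse()(hi, lo)
print(hi2.shape, lo2.shape)   # HR branch stays at 64x64 throughout
```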
Author:
Zhou, Daquan, Kang, Bingyi, Jin, Xiaojie, Yang, Linjie, Lian, Xiaochen, Jiang, Zihang, Hou, Qibin, Feng, Jiashi
Vision transformers (ViTs) have been successfully applied in image classification tasks recently. In this paper, we show that, unlike convolutional neural networks (CNNs) that can be improved by stacking more convolutional layers, the performance of ViT…
External link:
http://arxiv.org/abs/2103.11886
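To make the "stacking more layers" comparison concrete, a minimal sketch of a plain pre-norm ViT encoder whose depth is a single knob; this is the naive deepening baseline the abstract refers to, not the paper's proposed model:

```python
# Minimal sketch (not the paper's model): a plain ViT encoder where "going
# deeper" means appending identical blocks, the baseline the abstract argues
# does not keep improving accuracy.
import torch
import torch.nn as nn

class Block(nn.Module):
    def __init__(self, dim, heads):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # pre-norm self-attention
        return x + self.mlp(self.norm2(x))

class PlainViT(nn.Module):
    def __init__(self, dim=192, heads=3, depth=12, num_patches=196, num_classes=1000):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_patches, dim))
        self.blocks = nn.ModuleList([Block(dim, heads) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):             # x: (B, num_patches, dim) patch embeddings
        x = x + self.pos
        for blk in self.blocks:       # deepening = more identical blocks
            x = blk(x)
        return self.head(x.mean(dim=1))

model = PlainViT(depth=24)            # a deeper variant is just a larger `depth`
print(model(torch.randn(2, 196, 192)).shape)
```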
Author:
Zhou, Daquan, Jin, Xiaojie, Lian, Xiaochen, Yang, Linjie, Xue, Yujing, Hou, Qibin, Feng, Jiashi
Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction. In this paper, we consider automating the search space design to minimize human interference, which however…
External link:
http://arxiv.org/abs/2103.11833
Author:
Li, Yingwei, Jin, Xiaojie, Mei, Jieru, Lian, Xiaochen, Yang, Linjie, Xie, Cihang, Yu, Qihang, Zhou, Yuyin, Bai, Song, Yuille, Alan
Non-Local (NL) blocks have been widely studied in various vision tasks. However, it has been rarely explored to embed the NL blocks in mobile neural networks, mainly due to the following challenges: 1) NL blocks generally have heavy computation cost…
External link:
http://arxiv.org/abs/2004.01961
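For context on the quoted cost concern, a sketch of the standard embedded-Gaussian Non-Local block (Wang et al., 2018): the attention matrix is quadratic in the number of spatial positions, which is why embedding NL blocks in mobile networks is hard. This is background, not the paper's lightweight design:

```python
# Standard Non-Local block (embedded-Gaussian form); the (HW x HW) attention
# matrix below is the heavy-computation term the abstract refers to.
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or channels // 2
        self.theta = nn.Conv2d(channels, reduced, 1)   # query embedding
        self.phi = nn.Conv2d(channels, reduced, 1)     # key embedding
        self.g = nn.Conv2d(channels, reduced, 1)       # value embedding
        self.out = nn.Conv2d(reduced, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW): quadratic in HW
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

feat = torch.randn(1, 64, 28, 28)
print(NonLocalBlock(64)(feat).shape)                   # torch.Size([1, 64, 28, 28])
```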
Author:
Mei, Jieru, Li, Yingwei, Lian, Xiaochen, Jin, Xiaojie, Yang, Linjie, Yuille, Alan, Yang, Jianchao
Search space design is very critical to neural architecture search (NAS) algorithms. We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms. This search…
External link:
http://arxiv.org/abs/1912.09640
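A toy sketch of what such a fine-grained search unit might look like, assuming from the snippet that an atomic block is a small unit of two pointwise convolutions bridged by a channel-wise operation; the exact definition is the paper's, and `atomic_block` here is hypothetical:

```python
# Toy sketch of a fine-grained search unit (assumed shape, not the paper's
# exact definition): conv1x1 -> channel-wise conv -> conv1x1, whose width is
# the per-unit quantity a NAS algorithm could search over.
import torch
import torch.nn as nn

def atomic_block(in_ch, c, out_ch, kernel=3):
    # Two pointwise convs bridged by a depthwise (channel-wise) convolution.
    return nn.Sequential(
        nn.Conv2d(in_ch, c, 1, bias=False),
        nn.Conv2d(c, c, kernel, padding=kernel // 2, groups=c, bias=False),
        nn.Conv2d(c, out_ch, 1, bias=False),
    )

# A coarse block of width 64 can be viewed as 64 parallel atomic blocks of
# width 1; keeping or dropping individual units gives a much finer-grained
# space than choosing among whole candidate blocks.
blocks = nn.ModuleList([atomic_block(32, 1, 32) for _ in range(64)])
x = torch.randn(1, 32, 14, 14)
y = sum(b(x) for b in blocks)      # the ensemble behaves like one wide block
print(y.shape)
```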
Recently there has been a rising interest in training agents, embodied in virtual environments, to perform language-directed tasks by deep reinforcement learning. In this paper, we propose a simple but effective neural language grounding module for embodied…
External link:
http://arxiv.org/abs/1805.08329
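One common way to ground language in an embodied agent is to gate visual features with an instruction embedding before the policy head; the sketch below shows that generic pattern and is not the module the paper proposes (`GroundedPolicy` and its shapes are illustrative):

```python
# Hedged sketch, not the paper's module: gated-attention style fusion of an
# instruction embedding with CNN features, feeding a small policy head.
import torch
import torch.nn as nn

class GroundedPolicy(nn.Module):
    def __init__(self, vocab=1000, emb=64, vis_ch=32, n_actions=4):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab, emb)        # bag-of-words instruction encoder
        self.gate = nn.Linear(emb, vis_ch)              # one gate per visual channel
        self.policy = nn.Linear(vis_ch, n_actions)

    def forward(self, image_feat, instruction):
        # image_feat: (B, C, H, W) CNN features; instruction: (B, T) token ids
        g = torch.sigmoid(self.gate(self.embed(instruction)))   # (B, C) gates in [0, 1]
        fused = image_feat * g[:, :, None, None]                # channel-wise gating
        return self.policy(fused.mean(dim=(2, 3)))              # action logits

logits = GroundedPolicy()(torch.randn(2, 32, 7, 7), torch.randint(0, 1000, (2, 5)))
print(logits.shape)   # torch.Size([2, 4])
```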
This paper addresses the problem of semantic part parsing (segmentation) of cars, i.e. assigning every pixel within the car to one of the parts (e.g. body, window, lights, license plates and wheels). We formulate this as a landmark identification problem…
External link:
http://arxiv.org/abs/1406.2375
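A heatmap-based landmark head is one standard way to instantiate a landmark-identification formulation; the sketch below shows that generic readout and is not necessarily the paper's method:

```python
# Generic landmark readout (assumption, not the paper's method): predict one
# heatmap per landmark and take its peak as the landmark location.
import torch
import torch.nn as nn

class LandmarkHead(nn.Module):
    def __init__(self, vis_ch=64, n_landmarks=20):
        super().__init__()
        self.heatmaps = nn.Conv2d(vis_ch, n_landmarks, 1)   # one map per landmark

    def forward(self, feat):
        hm = self.heatmaps(feat)                    # (B, L, H, W)
        b, l, h, w = hm.shape
        idx = hm.flatten(2).argmax(dim=-1)          # peak index per landmark
        return torch.stack((idx % w, idx // w), dim=-1)   # (B, L, 2) as (x, y)

pts = LandmarkHead()(torch.randn(1, 64, 32, 32))
print(pts.shape)    # torch.Size([1, 20, 2])
```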
Author:
Lian, Xiaochen
Published in:
Lian, Xiaochen. (2017). Video and Image Analysis Using Local Information. UCLA: Statistics 0891. Retrieved from: http://www.escholarship.org/uc/item/1rx480pt
Local information is very crucial in many image and video analysis tasks. In this thesis, we introduce four representative works in exploiting local information. We first introduce a set of per-pixel labeling datasets, which provide a good platform for…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______325::f9227340c567e4a0cdcdc9c9345d3c77
http://www.escholarship.org/uc/item/1rx480pt
Author:
Lian, Xiaochen
Published in:
Lian, Xiaochen. (2016). Mining Spatial and Spatio-Temporal ROIs for Action Recognition. UCLA: Statistics 0891. Retrieved from: http://www.escholarship.org/uc/item/9gp7w2h3
In this paper, we propose an approach to classify action sequences. We observe that in action sequences the critical features for discriminating between actions occur only within sub-regions of the image. Hence deep network approaches will address the…
External link:
https://explore.openaire.eu/search/publication?articleId=od_______325::b6d667561efc5e5a892a26fe1a9225da
http://n2t.net/ark:/13030/m5pg6d0n
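Under the abstract's premise that the discriminative evidence lives in sub-regions, a toy sketch of classifying per-frame ROI crops and averaging over time; the ROIs are assumed given here, whereas mining them is the thesis's actual contribution:

```python
# Toy sketch (ROIs assumed given): pool a region of interest from each frame's
# feature map, score it, and average the per-frame scores over time.
import torch
import torch.nn as nn
import torchvision.ops as ops

class ROIActionClassifier(nn.Module):
    def __init__(self, vis_ch=64, n_actions=10):
        super().__init__()
        self.head = nn.Linear(vis_ch * 7 * 7, n_actions)

    def forward(self, frame_feats, rois):
        # frame_feats: (T, C, H, W) per-frame feature maps
        # rois: (T, 5) boxes as (frame_index, x1, y1, x2, y2) in feature coords
        crops = ops.roi_align(frame_feats, rois, output_size=(7, 7))  # (T, C, 7, 7)
        logits = self.head(crops.flatten(1))                          # per-frame scores
        return logits.mean(dim=0)                                     # temporal average

feats = torch.randn(8, 64, 28, 28)
boxes = torch.tensor([[t, 4.0, 4.0, 20.0, 20.0] for t in range(8)])
print(ROIActionClassifier()(feats, boxes).shape)   # torch.Size([10])
```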