Showing 1 - 10 of 32 for search: '"Lyu, Jiancheng"'
Author:
Zheng, Yunling, Xu, Zeyi, Xue, Fanghui, Yang, Biao, Lyu, Jiancheng, Zhang, Shuai, Qi, Yingyong, Xin, Jack
We propose and demonstrate an alternating Fourier and image domain filtering approach for feature extraction, as an efficient alternative for building a vision backbone without the computationally intensive attention mechanism. The performance among the light…
External link:
http://arxiv.org/abs/2407.12217
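The alternating-domain idea can be illustrated with a minimal NumPy sketch (an illustration only, not the paper's actual backbone): one low-pass filtering step in the Fourier domain, followed by one small convolution in the image domain.

```python
import numpy as np

def fourier_lowpass(x, keep=0.25):
    """Fourier-domain step: zero out high frequencies of a 2-D array."""
    F = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    mask = np.zeros_like(F)
    ch, cw = h // 2, w // 2
    rh, rw = int(h * keep / 2), int(w * keep / 2)
    mask[ch - rh:ch + rh, cw - rw:cw + rw] = 1
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

def spatial_filter(x, k):
    """Image-domain step: valid-mode 2-D correlation with a small kernel."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def alternating_block(x):
    """One Fourier-domain pass followed by one image-domain pass."""
    x = fourier_lowpass(x)
    k = np.ones((3, 3)) / 9.0   # simple averaging kernel (illustrative)
    return spatial_filter(x, k)

x = np.random.default_rng(0).standard_normal((16, 16))
y = alternating_block(x)
print(y.shape)  # (14, 14)
```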
Continual learning (CL) has attracted increasing attention in recent years. It aims to mimic the human ability to learn new concepts without catastrophic forgetting. While existing CL methods accomplish this to some extent, they are still prone to…
External link:
http://arxiv.org/abs/2306.08200
Author:
Zhang, Renhong, Cheng, Tianheng, Yang, Shusheng, Jiang, Haoyi, Zhang, Shuai, Lyu, Jiancheng, Li, Xin, Ying, Xiaowen, Gao, Dashan, Liu, Wenyu, Wang, Xinggang
Video instance segmentation on mobile devices is an important yet very challenging edge-AI problem. It mainly suffers from (1) heavy computation and memory costs of frame-by-frame pixel-level instance perception and (2) complicated heuristics for tr…
External link:
http://arxiv.org/abs/2303.17594
The problem of class incremental learning (CIL) is considered. State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task. While effective from a computational standpoint, these me…
External link:
http://arxiv.org/abs/2303.12696
It is expensive to compute residual diffusivity in chaotic incompressible flows by solving the advection-diffusion equation, due to the formation of sharp internal layers in the advection-dominated regime. Proper orthogonal decomposition (POD) is a class…
External link:
http://arxiv.org/abs/1910.00403
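The basic POD recipe can be sketched in a few lines of NumPy (synthetic snapshots here, not the paper's advection-diffusion data): collect solution snapshots as columns of a matrix, take its SVD, and keep the leading left singular vectors as a reduced basis.

```python
import numpy as np

# Synthetic snapshot matrix: each column is one solution snapshot in time.
n_space, n_time = 200, 40
t = np.linspace(0, 1, n_time)
x = np.linspace(0, 1, n_space)
snapshots = (np.sin(np.pi * np.outer(x, t))
             + 0.1 * np.outer(np.cos(3 * np.pi * x), t))

# POD basis = leading left singular vectors of the snapshot matrix.
U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 4                      # number of retained modes (illustrative)
basis = U[:, :r]           # orthonormal reduced basis

# Project onto the reduced space and reconstruct.
coeffs = basis.T @ snapshots
reconstruction = basis @ coeffs
rel_err = (np.linalg.norm(snapshots - reconstruction)
           / np.linalg.norm(snapshots))
energy = np.sum(s[:r] ** 2) / np.sum(s ** 2)   # captured "energy" fraction
print(rel_err, energy)
```

In a reduced-order model the governing equations are then projected onto `basis` as well; this sketch only shows how the basis itself is extracted.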
Author:
Lyu, Jiancheng, Sheen, Spencer
We study channel number reduction in combination with weight binarization (1-bit weight precision) to trim a convolutional neural network for a keyword spotting (classification) task. We adopt a group-wise splitting method based on the group Lasso pe…
External link:
http://arxiv.org/abs/1909.05623
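A minimal sketch of how a group Lasso penalty induces channel-level sparsity (illustrative shapes and threshold, not the paper's training setup): treating each output channel of a convolution weight as one group, the proximal step zeroes entire channels whose norm falls below the threshold.

```python
import numpy as np

def group_lasso_penalty(W):
    """Group Lasso over output channels: sum of per-channel l2 norms.

    W has shape (out_channels, in_channels, kh, kw); each output channel
    is one group, so the penalty drives whole channels to zero."""
    return sum(np.linalg.norm(W[c]) for c in range(W.shape[0]))

def prox_group_lasso(W, lam):
    """Proximal (group soft-threshold) step: shrink each channel's norm
    by lam, zeroing channels whose norm is below lam."""
    out = np.zeros_like(W)
    for c in range(W.shape[0]):
        n = np.linalg.norm(W[c])
        if n > lam:
            out[c] = (1 - lam / n) * W[c]
    return out

rng = np.random.default_rng(1)
W = rng.standard_normal((8, 4, 3, 3))
W[3] *= 0.01                          # make one channel weak
W_new = prox_group_lasso(W, lam=0.5)
pruned = [c for c in range(8) if np.linalg.norm(W_new[c]) == 0]
print(pruned)  # [3]
```

Channels surviving the threshold can then be kept at 1-bit precision; the binarization step itself is sketched under the 1811.02784 entry's technique family.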
Training activation-quantized neural networks involves minimizing a piecewise constant function whose gradient vanishes almost everywhere, which is problematic for standard back-propagation via the chain rule. An empirical way around this issue is to…
External link:
http://arxiv.org/abs/1903.05662
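The empirical workaround referred to above is usually a straight-through-style surrogate gradient, sketched here as a hand-rolled forward/backward pair in NumPy (a generic illustration, not this paper's specific coarse-gradient choice): the forward pass uses the true piecewise-constant quantizer, while the backward pass substitutes the identity's gradient inside the clipping range.

```python
import numpy as np

def quantize(x, bits=2):
    """Forward pass: uniform quantization of activations clipped to [0, 1].
    This map is piecewise constant, so its true gradient is 0 a.e."""
    levels = 2 ** bits - 1
    return np.round(np.clip(x, 0, 1) * levels) / levels

def ste_backward(grad_out, x):
    """Backward pass (straight-through estimator): pretend the quantizer
    is the identity on [0, 1], so the incoming gradient passes through
    unchanged inside the clipping range and is zeroed outside it."""
    return grad_out * ((x >= 0) & (x <= 1))

x = np.array([-0.2, 0.1, 0.4, 0.9, 1.3])
y = quantize(x)                          # true (vanishing-gradient) map
g = ste_backward(np.ones_like(x), x)     # surrogate gradient
print(y)   # [0.  0.  0.33333333  1.  1.]
print(g)   # [0. 1. 1. 1. 0.]
```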
ShuffleNet is a state-of-the-art lightweight convolutional neural network architecture. Its basic operations include group, channel-wise convolution and channel shuffling. However, channel shuffling is designed manually and empirically. Mathematically,…
External link:
http://arxiv.org/abs/1901.08624
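The manually designed shuffle in question has a simple closed form, commonly written as a reshape-transpose-reshape (this is the standard ShuffleNet operation, not the learned alternative the paper studies):

```python
import numpy as np

def channel_shuffle(x, groups):
    """ShuffleNet-style channel shuffle: view the channel axis as
    (groups, channels_per_group), transpose those two axes, and flatten
    back, so that features from different groups are interleaved."""
    n, c, h, w = x.shape
    assert c % groups == 0
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)
    return x.reshape(n, c, h, w)

x = np.arange(6).reshape(1, 6, 1, 1)   # channels labeled 0..5
y = channel_shuffle(x, groups=2)
print(y.ravel())  # [0 3 1 4 2 5]
```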
Author:
Sheen, Spencer, Lyu, Jiancheng
We propose and study a new projection formula for training binary-weight convolutional neural networks. The projection formula measures the error in approximating a full-precision (32-bit) vector by a 1-bit vector in the l_1 norm instead of the stand…
External link:
http://arxiv.org/abs/1811.02784
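The contrast between the two projections can be sketched directly (illustrative weights; the median-vs-mean fact is the standard closed form for these two criteria, though the paper's exact formulas may differ): minimizing the l_2 error of approximating w by a*sign(w) gives the scale a = mean(|w|), while minimizing the l_1 error, sum_i ||w_i| - a|, gives a = median(|w|), which is less sensitive to outlier weights.

```python
import numpy as np

def binarize_l2(w):
    """l2-optimal 1-bit projection (BinaryConnect/XNOR-style):
    scale = mean(|w|)."""
    return np.mean(np.abs(w)) * np.sign(w)

def binarize_l1(w):
    """l1-optimal 1-bit projection: minimizing sum_i |w_i - a*sign(w_i)|
    = sum_i ||w_i| - a| over a gives scale = median(|w|)."""
    return np.median(np.abs(w)) * np.sign(w)

w = np.array([0.1, -0.2, 0.3, -4.0])   # one outlier weight
print(binarize_l2(w))   # scale = mean(|w|)   = 1.15
print(binarize_l1(w))   # scale = median(|w|) = 0.25
```

The outlier inflates the l_2 scale to 1.15, while the l_1 scale of 0.25 stays close to the bulk of the weights.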
Quantized deep neural networks (QDNNs) are attractive due to their much lower memory footprint and faster inference speed compared with their regular full-precision counterparts. To maintain the same performance level, especially at low bit-widths, QDNNs must be…
External link:
http://arxiv.org/abs/1808.05240