Showing 1 - 10 of 18 for query: '"Hou, Zejiang"'
Author:
Peri, Raghuveer, Jayanthi, Sai Muralidhar, Ronanki, Srikanth, Bhatia, Anshu, Mundnich, Karel, Dingliwal, Saket, Das, Nilaksh, Hou, Zejiang, Huybrechts, Goeric, Vishnubhotla, Srikanth, Garcia-Romero, Daniel, Srinivasan, Sundararajan, Han, Kyu J, Kirchhoff, Katrin
Integrated Speech and Large Language Models (SLMs) that can follow speech instructions and generate relevant text responses have gained popularity lately. However, the safety and robustness of these models remain largely unclear. In this work, we in…
External link:
http://arxiv.org/abs/2405.08317
Self-attention based transformer models have been dominating many computer vision tasks in the past few years. Their superb model quality heavily depends on excessively large labeled image datasets. In order to reduce the reliance on large label…
External link:
http://arxiv.org/abs/2208.06049
Large pretrained language models (PLMs) are often domain- or task-adapted via fine-tuning or prompting. Fine-tuning requires modifying all of the parameters and having enough data to avoid overfitting, while prompting requires no training and few examp…
External link:
http://arxiv.org/abs/2207.03509
Author:
Hou, Zejiang, Qin, Minghai, Sun, Fei, Ma, Xiaolong, Yuan, Kun, Xu, Yi, Chen, Yen-Kuang, Jin, Rong, Xie, Yuan, Kung, Sun-Yuan
Channel pruning has been broadly recognized as an effective technique to reduce the computation and memory cost of deep convolutional neural networks. However, conventional pruning methods have limitations: they are restricted to pruning proc…
External link:
http://arxiv.org/abs/2203.15794
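For context on the pruning entries in this listing, the snippet below is a minimal sketch of the conventional magnitude-based channel-pruning baseline (ranking a convolution layer's output channels by filter L1 norm and keeping the top fraction). This illustrates the kind of conventional method such abstracts contrast with; it is not the procedure proposed in the paper above.

```python
import numpy as np

def l1_channel_scores(conv_weight):
    """Score each output channel of a conv layer by its filter's L1 norm.
    conv_weight has shape (out_channels, in_channels, kH, kW)."""
    return np.abs(conv_weight).reshape(conv_weight.shape[0], -1).sum(axis=1)

def prune_channels(conv_weight, keep_ratio=0.5):
    """Keep the top-scoring fraction of output channels, preserving order."""
    scores = l1_channel_scores(conv_weight)
    n_keep = max(1, int(round(keep_ratio * len(scores))))
    keep = np.sort(np.argsort(scores)[::-1][:n_keep])
    return conv_weight[keep], keep

# Toy layer with 8 output channels; keep the strongest half.
W = np.random.default_rng(0).normal(size=(8, 3, 3, 3))
W_pruned, kept = prune_channels(W, keep_ratio=0.5)
```

In a real network, the corresponding input channels of the next layer (and any BatchNorm parameters) would be pruned consistently with `kept`.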
Author:
Hou, Zejiang, Kung, Sun-Yuan
Vision transformers (ViT) have recently attracted considerable attention, but the huge computational cost remains an issue for practical deployment. Previous ViT pruning methods tend to prune the model along one dimension solely, which may suffer fr…
External link:
http://arxiv.org/abs/2201.00043
Author:
Hou, Zejiang, Kung, Sun-Yuan
We study the few-shot learning (FSL) problem, where a model learns to recognize new objects with extremely few labeled training data per category. Most previous FSL approaches resort to the meta-learning paradigm, where the model accumulates induc…
External link:
http://arxiv.org/abs/2109.02820
Author:
Ma, Xiaolong, Qin, Minghai, Sun, Fei, Hou, Zejiang, Yuan, Kun, Xu, Yi, Wang, Yanzhi, Chen, Yen-Kuang, Jin, Rong, Xie, Yuan
Deep neural networks (DNNs) are effective in solving many real-world problems. Larger DNN models usually exhibit better quality (e.g., accuracy), but their excessive computation results in long inference time. Model sparsification can reduce the compu…
External link:
http://arxiv.org/abs/2106.09857
Author:
Hou, Zejiang, Kung, Sun-Yuan
Network pruning has become the de facto tool to accelerate deep neural networks for mobile and edge applications. Recently, feature-map discriminant based channel pruning has shown promising results, as it aligns well with the CNN objective of differ…
External link:
http://arxiv.org/abs/2005.13796
Kernel approximation methods create explicit, low-dimensional kernel feature maps to deal with the high computational and memory complexity of standard techniques. This work studies a supervised kernel learning methodology to optimize such mappings.
External link:
http://arxiv.org/abs/1909.10432
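The abstract above describes explicit, low-dimensional kernel feature maps. As an illustration of what such a mapping looks like, here is a sketch of the standard unsupervised baseline, random Fourier features for the RBF kernel (Rahimi and Recht); the paper itself studies a supervised way to learn such mappings, which this sketch does not implement.

```python
import numpy as np

def rbf_random_features(X, n_features=2048, gamma=0.5, seed=0):
    """Explicit feature map z(x) such that z(x) . z(y) approximates
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the Fourier transform of the RBF kernel:
    # w ~ N(0, 2*gamma*I), plus a uniform phase shift.
    W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, n_features))
    b = rng.uniform(0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# With the explicit map, kernel evaluations become plain dot products.
X = np.random.default_rng(1).normal(size=(5, 3))
Z = rbf_random_features(X, n_features=2048, gamma=0.5)
K_approx = Z @ Z.T  # approximate 5x5 kernel matrix
K_exact = np.exp(-0.5 * ((X[:, None] - X[None]) ** 2).sum(-1))
```

The approximation error shrinks as O(1/sqrt(n_features)), which is why such maps trade a controllable amount of accuracy for linear-time training with ordinary linear models.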
Academic article
This result is not available to users who are not signed in; sign in to view it.