Author:
He, Xuanli, Keivanloo, Iman, Xu, Yi, He, Xiang, Zeng, Belinda, Rajagopalan, Santosh, Chilimbi, Trishul
Pre-training and then fine-tuning large language models is commonly used to achieve state-of-the-art performance in natural language processing (NLP) tasks. However, most pre-trained models suffer from low inference speed. Deploying such large models …
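For context only, a minimal sketch of the pre-train/fine-tune workflow the abstract refers to, not code from the cited paper. It assumes the Hugging Face transformers library; the model name "bert-base-uncased", the toy batch, and the hyperparameters are placeholders.

    # Minimal fine-tuning sketch: load a pre-trained Transformer and take
    # one gradient step on a downstream classification task.
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2
    )

    # Toy labelled batch standing in for a downstream NLP dataset.
    texts = ["a great movie", "a dull movie"]
    labels = torch.tensor([1, 0])
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

    optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
    model.train()
    outputs = model(**batch, labels=labels)   # forward pass computes the loss
    outputs.loss.backward()                   # one fine-tuning gradient step
    optimizer.step()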
External link:
http://arxiv.org/abs/2111.00230