SepLLM: Accelerate Large Language Models by Compressing One Segment into One Separator
Author: | Chen, Guoxuan; Shi, Han; Li, Jiawei; Gao, Yihang; Ren, Xiaozhe; Chen, Yimeng; Jiang, Xin; Li, Zhenguo; Liu, Weiyang; Huang, Chao |
---|---|
Publication year: | 2024 |
Subject: | |
Document type: | Working Paper |
Description: | Large Language Models (LLMs) have exhibited exceptional performance across a spectrum of natural language processing tasks. However, their substantial sizes pose considerable challenges, particularly in computational demands and inference speed, due to the quadratic complexity of self-attention. In this work, we have identified a key pattern: certain seemingly meaningless special tokens (i.e., separators) contribute disproportionately to attention scores compared with semantically meaningful tokens. This observation suggests that the information of the segments between these separator tokens can be condensed into the separator tokens themselves without significant information loss. Guided by this insight, we introduce SepLLM, a plug-and-play framework that accelerates inference by compressing these segments and eliminating redundant tokens. Additionally, we implement efficient kernels for training acceleration. Experimental results across training-free, training-from-scratch, and post-training settings demonstrate SepLLM's effectiveness. Notably, with the Llama-3-8B backbone, SepLLM achieves over a 50% reduction in KV cache on the GSM8K-CoT benchmark while maintaining comparable performance. Furthermore, in streaming settings, SepLLM effectively processes sequences of 4 million tokens or more while maintaining consistent language modeling capability. (An illustrative sketch of the separator-based cache-pruning idea follows this record.) Comment: We have made our code publicly available at sepllm.github.io. Our codebase supports efficient multi-node distributed training with the accelerated attention module Sep-Attention, and it also supports numerous existing fusion operators, such as fused RoPE, to accelerate the training process. If you find our code helpful, please consider giving us a **star** on GitHub ^_^. Thank you very much! |
Database: | arXiv |
External link: | |
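The description above only outlines the mechanism at a high level: attention concentrates on separator tokens, so the key/value entries of segment-interior tokens can be dropped while the separators (plus some local context) are retained. The following Python/PyTorch sketch shows one way such separator-based KV-cache pruning could look in a training-free setting; the function names (`build_keep_mask`, `prune_kv`), the placeholder separator ids, and the choice to also keep a few initial tokens and a recent window are illustrative assumptions, not the authors' released implementation (which is available at sepllm.github.io).

```python
# Hypothetical sketch of separator-based KV-cache pruning in the spirit of
# SepLLM's training-free setting. Names and the exact retention policy
# (initial tokens + separators + recent window) are assumptions for
# illustration, not the released implementation.
import torch


def build_keep_mask(token_ids: torch.Tensor,
                    sep_token_ids: set,
                    n_initial: int = 4,
                    recent_window: int = 64) -> torch.Tensor:
    """Boolean mask over a 1-D sequence of token ids marking which positions'
    key/value entries are kept in the cache.

    Kept positions (assumed policy):
      * the first `n_initial` tokens, which act as stable attention anchors,
      * every separator token (e.g. ids for '.', ',', or the newline token),
        assumed to summarize the segment that precedes it,
      * the most recent `recent_window` tokens for local context.
    Everything else (segment-interior tokens) is dropped, shrinking the cache.
    """
    seq_len = token_ids.shape[-1]
    keep = torch.zeros(seq_len, dtype=torch.bool)
    keep[:n_initial] = True
    keep[-recent_window:] = True
    for sep_id in sep_token_ids:
        keep |= token_ids.eq(sep_id)
    return keep


def prune_kv(kv_cache: torch.Tensor, keep: torch.Tensor) -> torch.Tensor:
    """Filter a (seq_len, n_heads, head_dim) key or value tensor down to the
    retained positions."""
    return kv_cache[keep]


if __name__ == "__main__":
    seq_len, n_heads, head_dim = 1024, 8, 64
    token_ids = torch.randint(0, 32000, (seq_len,))
    sep_ids = {13, 11}  # placeholder separator token ids for illustration
    keep = build_keep_mask(token_ids, sep_ids)
    keys = torch.randn(seq_len, n_heads, head_dim)
    pruned = prune_kv(keys, keep)
    print(f"kept {pruned.shape[0]} / {seq_len} positions "
          f"({100 * pruned.shape[0] / seq_len:.1f}% of the KV cache)")
```

Under these assumptions, the cache size after pruning scales with the number of separators plus a constant overhead (initial anchors and the recent window), rather than with the full sequence length, which is consistent with the KV-cache reduction reported in the abstract.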