Author: |
Hu, Bodun, Li, Jiamin, Xu, Le, Lee, Myungjin, Jajoo, Akshay, Kim, Geon-Woo, Xu, Hong, Akella, Aditya |
Publication year: |
2024 |
Subject: |
|
Document type: |
Working Paper |
Description: |
The increasing demand for Large Language Models (LLMs) across various applications has led to a significant shift in the design of deep learning serving systems. Deploying LLMs, particularly in multi-tenant environments, poses substantial challenges due to their high computational and memory demands. We introduce BlockLLM, a serving system that leverages component sharing among fine-tuned LLM models to provide an efficient and flexible solution for LLM workloads. BlockLLM partitions models into finer-grained blocks, enabling the reuse of model components and independent provisioning to improve computation efficiency. BlockLLM comprises an offline block zoo for storing blocks and an online system that serves requests through chains of blocks. It offers multi-fold flexibility: (1) adaptive assembly of blocks on the fly through equivalence evaluation among blocks in the zoo; (2) per-block batch size configuration and best-effort KV cache coordination at the individual block level; (3) speculative execution and locality-aware block placement to reduce the communication costs of dynamic block resource allocation. Our evaluation shows that BlockLLM reduces memory and storage footprints and improves computational efficiency, outperforming existing serving approaches in 95th-percentile latency and GPU utilization by 33.5% and 20.1%, respectively, with minimal impact on accuracy. |
Database: |
arXiv |
External link: |
|