Lyra: An Efficient and Speech-Centric Framework for Omni-Cognition

Author: Zhong, Zhisheng, Wang, Chengyao, Liu, Yuqi, Yang, Senqiao, Tang, Longxiang, Zhang, Yuechen, Li, Jingyao, Qu, Tianyuan, Li, Yanwei, Chen, Yukang, Yu, Shaozuo, Wu, Sitong, Lo, Eric, Liu, Shu, Jia, Jiaya
Publication year: 2024
Subject:
Document type: Working Paper
Description: As Multi-modal Large Language Models (MLLMs) evolve, expanding beyond single-domain capabilities is essential to meet the demands for more versatile and efficient AI. However, previous omni-models have insufficiently explored speech, neglecting its integration with multi-modality. We introduce Lyra, an efficient MLLM that enhances multimodal abilities, including advanced long-speech comprehension, sound understanding, cross-modality efficiency, and seamless speech interaction. To achieve efficiency and speech-centric capabilities, Lyra employs three strategies: (1) leveraging existing open-source large models and a proposed multi-modality LoRA to reduce training costs and data requirements; (2) using a latent multi-modality regularizer and extractor to strengthen the relationship between speech and other modalities, thereby enhancing model performance; and (3) constructing a high-quality, extensive dataset that includes 1.5M multi-modal (language, vision, audio) data samples and 12K long-speech samples, enabling Lyra to handle complex long-speech inputs and achieve more robust omni-cognition. Compared to other omni-methods, Lyra achieves state-of-the-art performance on various vision-language, vision-speech, and speech-language benchmarks, while also using fewer computational resources and less training data.
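To make strategy (1) concrete, below is a minimal, hypothetical PyTorch sketch of a LoRA-style adapter: a frozen pretrained projection is augmented with a small trainable low-rank update, which is the general mechanism the abstract refers to for reducing training cost. This is not Lyra's released code; the class name LoRALinear and the rank/alpha parameters are illustrative assumptions.

import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update (LoRA)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the pretrained weights frozen
        # Low-rank factors: in_features -> rank -> out_features
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # start as a zero (identity) update
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


if __name__ == "__main__":
    # Example: adapting a frozen 1024 -> 4096 projection; only the two small
    # low-rank matrices are trainable.
    proj = LoRALinear(nn.Linear(1024, 4096), rank=8)
    out = proj(torch.randn(2, 1024))
    print(out.shape)  # torch.Size([2, 4096])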
Comment: Tech report
Database: arXiv