Showing 1 - 10 of 122 for search: '"Bai, Yu"'
The doping evolution behaviors of the normal state magnetic excitations (MEs) of the nickelate La3Ni2O7 are theoretically studied in this paper. For a filling of n = 3.0, which corresponds roughly to the material which realizes the superconductivity …
External link:
http://arxiv.org/abs/2408.03763
In the field of model compression, choosing an appropriate rank for tensor decomposition is pivotal for balancing model compression rate and efficiency. However, this selection, whether done manually or through optimization-based automatic methods, …
External link:
http://arxiv.org/abs/2408.01534
Author:
Zhao, Fuzheng, Bai, Yu
This study aims to design and implement a laughter recognition system based on multimodal fusion and deep learning, leveraging image and audio processing technologies to achieve accurate laughter recognition and emotion analysis. First, the system …
External link:
http://arxiv.org/abs/2407.21391
Multi-label few-shot aspect category detection aims at identifying multiple aspect categories from sentences with a limited number of training instances. The representation of sentences and categories is a key issue in this task. Most current …
External link:
http://arxiv.org/abs/2407.20673
Author:
Wu, Siwei, Zhu, Kang, Bai, Yu, Liang, Yiming, Li, Yizhi, Wu, Haoning, Liu, J. H., Liu, Ruibo, Qu, Xingwei, Cheng, Xuxin, Zhang, Ge, Huang, Wenhao, Lin, Chenghua
Given the remarkable success that large visual language models (LVLMs) have achieved in image perception tasks, the endeavor to make LVLMs perceive the world like humans is drawing increasing attention. Current multi-modal benchmarks primarily focus …
External link:
http://arxiv.org/abs/2407.17379
In Greek mythology, Pistis symbolized good faith, trust, and reliability. Drawing inspiration from these principles, Pistis-RAG is a scalable multi-stage framework designed to address the challenges of large-scale retrieval-augmented generation (RAG) …
External link:
http://arxiv.org/abs/2407.00072
Recent studies have demonstrated that In-Context Learning (ICL), through the use of specific demonstrations, can align Large Language Models (LLMs) with human preferences, known as In-Context Alignment (ICA), indicating that models can comprehend …
External link:
http://arxiv.org/abs/2406.11474
Author:
Bai, Yu, Zou, Xiyuan, Huang, Heyan, Chen, Sanxing, Rondeau, Marc-Antoine, Gao, Yang, Cheung, Jackie Chi Kit
Long sequence modeling has gained broad interest as large language models (LLMs) continue to advance. Recent research has identified that a large portion of hidden states within the key-value caches of Transformer models can be discarded (also termed …
External link:
http://arxiv.org/abs/2406.12018
Author:
Wang, Tengbo, Bai, Yu
How to extract instance-level masks without instance-level supervision is the main challenge of weakly supervised instance segmentation (WSIS). Popular WSIS methods estimate a displacement field (DF) via learning inter-pixel relations and perform …
External link:
http://arxiv.org/abs/2406.18558
Author:
Wei, Yukang, Bai, Yu
Temperature plays a pivotal role in moderating label softness in the realm of knowledge distillation (KD). Traditional approaches often employ a static temperature throughout the KD process, which fails to address the nuanced complexities of samples …
External link:
http://arxiv.org/abs/2404.12711