Showing 1 - 10 of 104,636 results
for search: '"Chen, Chen"'
Author:
CHEN CHEN
Published in:
Phi Kappa Phi Forum. Spring 2024, Vol. 104, Issue 1, p6-6. 1/2p.
Federated Learning (FL) has emerged as a promising paradigm for collaborative machine learning while preserving user data privacy. Despite its potential, standard FL lacks support for diverse heterogeneous device prototypes, which vary significantly in …
External link:
http://arxiv.org/abs/2409.18461
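The core federated-learning idea the abstract builds on can be illustrated with a minimal federated-averaging (FedAvg) sketch. This is an assumption for illustration only, not the paper's method: clients fit a toy linear model locally and share only weights, never raw data.

```python
import numpy as np

def local_update(weights, data, targets, lr=0.1, epochs=5):
    """One client's local gradient descent on a linear model (illustrative)."""
    w = weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - targets) / len(targets)
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    """One FedAvg round: clients train locally, the server averages weights.

    Raw data never leaves a client; only model weights are communicated.
    """
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Weight each client's model by its dataset size.
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X := rng.normal(size=(50, 2)), X @ true_w) for _ in range(3)]

w = np.zeros(2)
for _ in range(20):
    w = fedavg_round(w, clients)
```

Device heterogeneity, which the abstract highlights, would show up here as clients differing in model size or compute budget, which plain FedAvg does not handle.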
Active learning (AL) has achieved great success by selecting the most valuable examples from unlabeled data. However, AL methods usually deteriorate in real scenarios where open-set noise is involved, which is studied as open-set annotation (OSA). In this …
External link:
http://arxiv.org/abs/2409.17607
Author:
Gong, Xueluan, Li, Mingzhe, Zhang, Yilin, Ran, Fengyuan, Chen, Chen, Chen, Yanjiao, Wang, Qian, Lam, Kwok-Yan
Large Language Models (LLMs) have excelled in various tasks but remain vulnerable to jailbreaking attacks, in which attackers craft jailbreak prompts that mislead the model into producing harmful or offensive content. Current jailbreak methods either rely …
External link:
http://arxiv.org/abs/2409.14866
3D Gaussian Splatting (3DGS) has gained significant attention for its application in dense Simultaneous Localization and Mapping (SLAM), enabling real-time rendering and high-fidelity mapping. However, existing 3DGS-based SLAM methods often suffer from …
External link:
http://arxiv.org/abs/2409.10982
Author:
Liu, Di, Chen, Meng, Lu, Baotong, Jiang, Huiqiang, Han, Zhenhua, Zhang, Qianxi, Chen, Qi, Zhang, Chengruidong, Ding, Bailu, Zhang, Kai, Chen, Chen, Yang, Fan, Yang, Yuqing, Qiu, Lili
Transformer-based Large Language Models (LLMs) have become increasingly important. However, due to the quadratic time complexity of attention computation, scaling LLMs to longer contexts incurs extremely high inference latency and GPU memory consumption …
External link:
http://arxiv.org/abs/2409.10516
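The quadratic cost this abstract refers to comes from the n × n attention score matrix: doubling the sequence length quadruples both the matmul work and the memory for the scores. A plain-NumPy sketch of standard scaled dot-product attention (illustrative only, not the paper's method) makes this visible:

```python
import numpy as np

def naive_attention(Q, K, V):
    """Standard scaled dot-product attention.

    The score matrix S is (n, n), so time and memory grow
    quadratically with sequence length n.
    """
    d = Q.shape[-1]
    S = Q @ K.T / np.sqrt(d)            # (n, n) -- the quadratic term
    S -= S.max(axis=-1, keepdims=True)  # numerical stability for softmax
    P = np.exp(S)
    P /= P.sum(axis=-1, keepdims=True)
    return P @ V

rng = np.random.default_rng(1)
for n in (512, 1024, 2048):
    Q = K = V = rng.normal(size=(n, 64))
    out = naive_attention(Q, K, V)
    score_bytes = n * n * 8  # float64 score matrix alone
    print(f"n={n}: score matrix takes {score_bytes / 2**20:.0f} MiB")
```

At n = 2048 the float64 score matrix alone is 32 MiB per head per layer, which is why long-context inference motivates methods that avoid materializing it.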
Author:
Yang, Chao-Han Huck, Park, Taejin, Gong, Yuan, Li, Yuanchao, Chen, Zhehuai, Lin, Yen-Ting, Chen, Chen, Hu, Yuchen, Dhawan, Kunal, Żelasko, Piotr, Zhang, Chao, Chen, Yun-Nung, Tsao, Yu, Balam, Jagadeesh, Ginsburg, Boris, Siniscalchi, Sabato Marco, Chng, Eng Siong, Bell, Peter, Lai, Catherine, Watanabe, Shinji, Stolcke, Andreas
Given recent advances in generative AI technology, a key question is how large language models (LLMs) can enhance acoustic modeling tasks using text decoding results from a frozen, pretrained automatic speech recognition (ASR) model. To explore new …
External link:
http://arxiv.org/abs/2409.09785
Author:
Wang, Helin, Yu, Meng, Hai, Jiarui, Chen, Chen, Hu, Yuchen, Chen, Rilin, Dehak, Najim, Yu, Dong
In this paper, we introduce SSR-Speech, a neural codec autoregressive model designed for stable, safe, and robust zero-shot text-based speech editing and text-to-speech synthesis. SSR-Speech is built on a Transformer decoder and incorporates …
External link:
http://arxiv.org/abs/2409.07556
Published in:
ACM Transactions on Computer-Human Interaction, 2024
Providing asynchronous feedback is a critical step in the 3D design workflow. A common approach to providing feedback is to pair textual comments with companion reference images, which helps illustrate the gist of the text. Ideally, feedback providers …
External link:
http://arxiv.org/abs/2409.06082
Transit timing variation (TTV) provides rich information about the mass and orbital properties of exoplanets, which are often obtained by solving an inverse problem via Markov Chain Monte Carlo (MCMC). In this paper, we design a new data-driven approach …
External link:
http://arxiv.org/abs/2409.04557
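The MCMC baseline this abstract contrasts with can be sketched minimally: a Metropolis-Hastings random walk recovering one parameter of a toy periodic timing model from noisy observations. The forward model, parameter values, and step sizes below are all illustrative assumptions, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy forward model: a sinusoidal timing signal with unknown amplitude a.
t = np.linspace(0, 10, 100)
def forward(a):
    return a * np.sin(2 * np.pi * t / 3.0)

a_true = 1.5
sigma = 0.2  # known observation noise
obs = forward(a_true) + rng.normal(scale=sigma, size=t.size)

def log_likelihood(a):
    r = obs - forward(a)
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis-Hastings random walk over the single parameter a.
samples = []
a = 0.0
ll = log_likelihood(a)
for _ in range(20000):
    prop = a + rng.normal(scale=0.05)       # symmetric proposal
    ll_prop = log_likelihood(prop)
    if np.log(rng.uniform()) < ll_prop - ll:  # accept/reject step
        a, ll = prop, ll_prop
    samples.append(a)

posterior = np.array(samples[5000:])  # discard burn-in
```

The posterior mean of `posterior` estimates the amplitude; the cost of many forward-model evaluations per fit is what motivates replacing this loop with a learned, data-driven inverse map.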