Showing 1 - 10 of 626 for search: '"ZHANG, Jingyang"'
Author:
Lin, Yueqian, Fu, Yuzhe, Zhang, Jingyang, Liu, Yudong, Zhang, Jianyi, Sun, Jingwei, Li, Hai "Helen", Chen, Yiran
We introduce Speech Information Retrieval (SIR), a new long-context task for Speech Large Language Models (Speech LLMs), and present SPIRAL, a 1,012-sample benchmark testing models' ability to extract critical details from approximately 90-second spoken…
External link:
http://arxiv.org/abs/2412.12009
Model merging is an emerging technique that integrates multiple models fine-tuned on different tasks to create a versatile model that excels in multiple domains. This scheme, in the meantime, may open up backdoor attack opportunities where one single…
External link:
http://arxiv.org/abs/2411.16746
Author:
Guo, Cong, Cheng, Feng, Du, Zhixu, Kiessling, James, Ku, Jonathan, Li, Shiyu, Li, Ziru, Ma, Mingyuan, Molom-Ochir, Tergel, Morris, Benjamin, Shan, Haoxuan, Sun, Jingwei, Wang, Yitu, Wei, Chiyue, Wu, Xueying, Wu, Yuhao, Yang, Hao Frank, Zhang, Jingyang, Zhang, Junyao, Zheng, Qilin, Zhou, Guanglei, Li, Hai, Chen, Yiran
The rapid development of large language models (LLMs) has significantly transformed the field of artificial intelligence, demonstrating remarkable capabilities in natural language processing and moving towards multi-modal functionality. These models…
External link:
http://arxiv.org/abs/2410.07265
Author:
Tang, Minxue, Wang, Yitu, Zhang, Jingyang, DiValentin, Louis, Ding, Aolin, Hass, Amin, Chen, Yiran, Li, Hai "Helen"
Federated Learning (FL) provides a strong privacy guarantee by enabling local training across edge devices without training data sharing, and Federated Adversarial Training (FAT) further enhances the robustness against adversarial examples, promoting…
External link:
http://arxiv.org/abs/2409.08372
Adversarial training enhances neural network robustness but suffers from a tendency to overfit and increased generalization errors on clean data. This work introduces CLAT, an innovative approach that mitigates adversarial overfitting by introducing…
External link:
http://arxiv.org/abs/2408.10204
Author:
Miyai, Atsuyuki, Yang, Jingkang, Zhang, Jingyang, Ming, Yifei, Lin, Yueqian, Yu, Qing, Irie, Go, Joty, Shafiq, Li, Yixuan, Li, Hai, Liu, Ziwei, Yamasaki, Toshihiko, Aizawa, Kiyoharu
Detecting out-of-distribution (OOD) samples is crucial for ensuring the safety of machine learning systems and has shaped the field of OOD detection. Meanwhile, several other problems are closely related to OOD detection, including anomaly detection…
External link:
http://arxiv.org/abs/2407.21794
Generalist segmentation models are increasingly favored for diverse tasks involving various objects from different image sources. Task-Incremental Learning (TIL) offers a privacy-preserving training paradigm using tasks arriving sequentially, instead…
External link:
http://arxiv.org/abs/2406.19796
The ability to learn sequentially from different data sites is crucial for a deep network in solving practical medical image diagnosis problems due to privacy restrictions and storage limitations. However, adapting on an incoming site leads to catastrophic…
External link:
http://arxiv.org/abs/2406.18037
Author:
Inkawhich, Matthew, Inkawhich, Nathan, Yang, Hao, Zhang, Jingyang, Linderman, Randolph, Chen, Yiran
An object detector's ability to detect and flag novel objects during open-world deployments is critical for many real-world applications. Unfortunately, much of the work in open object detection today is disjointed and fails to adequately address…
External link:
http://arxiv.org/abs/2404.10865
Author:
Zhang, Jingyang, Sun, Jingwei, Yeats, Eric, Ouyang, Yang, Kuo, Martin, Zhang, Jianyi, Yang, Hao Frank, Li, Hai
The problem of pre-training data detection for large language models (LLMs) has received growing attention due to its implications in critical issues like copyright violation and test data contamination. Despite improved performance, existing methods…
External link:
http://arxiv.org/abs/2404.02936