Showing 1 - 10 of 10,443
for search: '"LinJun An"'
Published in:
Scientific Reports, Vol 12, Iss 1, Pp 1-12 (2022)
Abstract The application of self-excitation is proposed to improve the efficiency of the nanoscale cutting procedure based on the use of a microcantilever in atomic force microscopy. The microcantilever shape is redesigned so that it can be used to produ…
External link:
https://doaj.org/article/8b8f234d4dfc44aea65d552b5e8766bc
Author:
Yang, Xinyu, Leng, Jixuan, Guo, Geyang, Zhao, Jiawei, Nakada, Ryumei, Zhang, Linjun, Yao, Huaxiu, Chen, Beidi
Current PEFT methods for LLMs can achieve either high quality, efficient training, or scalable serving, but not all three simultaneously. To address this limitation, we investigate sparse fine-tuning and observe a remarkable improvement in generaliza…
External link:
http://arxiv.org/abs/2412.06289
We initiate the study of differentially private learning in the proportional dimensionality regime, in which the number of data samples $n$ and problem dimension $d$ approach infinity at rates proportional to one another, meaning that $d / n \to \del…
External link:
http://arxiv.org/abs/2411.13682
The propensity of Large Language Models (LLMs) to generate hallucinations and non-factual content undermines their reliability in high-stakes domains, where rigorous control over Type I errors (the conditional probability of incorrectly classifying h…
External link:
http://arxiv.org/abs/2411.02603
Author:
Hou, Xiaotian, Zhang, Linjun
Algorithmic fairness in machine learning has recently garnered significant attention. However, two pressing challenges remain: (1) The fairness guarantees of existing fair classification methods often rely on specific data distribution assumptions an…
External link:
http://arxiv.org/abs/2410.16477
Author:
Xia, Peng, Zhu, Kangyu, Li, Haoran, Wang, Tianze, Shi, Weijia, Wang, Sheng, Zhang, Linjun, Zou, James, Yao, Huaxiu
Artificial Intelligence (AI) has demonstrated significant potential in healthcare, particularly in disease diagnosis and treatment planning. Recent progress in Medical Large Vision-Language Models (Med-LVLMs) has opened up new possibilities for inter…
External link:
http://arxiv.org/abs/2410.13085
Author:
Yang, Sheng, Wu, Yurong, Gao, Yan, Zhou, Zineng, Zhu, Bin Benjamin, Sun, Xiaodi, Lou, Jian-Guang, Ding, Zhiming, Hu, Anbang, Fang, Yuan, Li, Yunsong, Chen, Junyan, Yang, Linjun
Prompt engineering is essential for enhancing the performance of large language models (LLMs). When dealing with complex issues, prompt engineers tend to distill multiple patterns from examples and inject relevant solutions to optimize the prompts,…
External link:
http://arxiv.org/abs/2410.08696
Author:
Wu, Yurong, Gao, Yan, Zhu, Bin Benjamin, Zhou, Zineng, Sun, Xiaodi, Yang, Sheng, Lou, Jian-Guang, Ding, Zhiming, Yang, Linjun
Prompt engineering is pivotal for harnessing the capabilities of large language models (LLMs) across diverse applications. While existing prompt optimization methods improve prompt effectiveness, they often lead to prompt drifting, where newly genera…
External link:
http://arxiv.org/abs/2410.08601
Author:
Zhong, Yibo, Jiang, Haoxiang, Li, Lincan, Nakada, Ryumei, Liu, Tianci, Zhang, Linjun, Yao, Huaxiu, Wang, Haoyu
Fine-tuning pre-trained models is crucial for adapting large models to downstream tasks, often delivering state-of-the-art performance. However, fine-tuning all model parameters is resource-intensive and laborious, leading to the emergence of paramet…
External link:
http://arxiv.org/abs/2410.01870
The cost of encoding a system Hamiltonian in a digital quantum computer as a linear combination of unitaries (LCU) grows with the 1-norm of the LCU expansion. The Block Invariant Symmetry Shift (BLISS) technique reduces this 1-norm by modifying the H…
External link:
http://arxiv.org/abs/2409.18277