Showing 1 - 10 of 1,941
for search: '"WANG, MINGYANG"'
We study the complexity of deterministic and probabilistic inversions of partial computable functions on the reals.
External link:
http://arxiv.org/abs/2412.07592
Author:
Wang, Shaobo, Tang, Hongxuan, Wang, Mingyang, Zhang, Hongrui, Liu, Xuyang, Li, Weiya, Hu, Xuming, Zhang, Linfeng
The debate between self-interpretable models and post-hoc explanations for black-box models is central to Explainable AI (XAI). Self-interpretable models, such as concept-based networks, offer insights by connecting decisions to human-understandable …
External link:
http://arxiv.org/abs/2410.21815
To ensure large language models contain up-to-date knowledge, they need to be updated regularly. However, model editing is challenging as it might also affect knowledge that is unrelated to the new data. State-of-the-art methods identify parameters …
External link:
http://arxiv.org/abs/2410.02433
Recent multilingual pretrained language models (mPLMs) often avoid using language embeddings -- learnable vectors assigned to different languages. These embeddings are discarded for two main reasons: (1) mPLMs are expected to have a single, unified …
External link:
http://arxiv.org/abs/2409.18199
Author:
Liu, Yihong, Wang, Mingyang, Kargaran, Amir Hossein, Imani, Ayyoob, Xhelili, Orgest, Ye, Haotian, Ma, Chunlan, Yvon, François, Schütze, Hinrich
Recent studies have shown that post-aligning multilingual pretrained language models (mPLMs) using alignment objectives on both original and transliterated data can improve crosslingual alignment. This improvement further leads to better crosslingual …
External link:
http://arxiv.org/abs/2409.17326
Author:
Wang, Jike, Feng, Jianwen, Kang, Yu, Pan, Peichen, Ge, Jingxuan, Wang, Yan, Wang, Mingyang, Wu, Zhenxing, Zhang, Xingcai, Yu, Jiameng, Zhang, Xujun, Wang, Tianyue, Wen, Lirong, Yan, Guangning, Deng, Yafeng, Shi, Hui, Hsieh, Chang-Yu, Jiang, Zhihui, Hou, Tingjun
We propose AMP-Designer, an LLM-based foundation model approach for the rapid design of novel antimicrobial peptides (AMPs) with multiple desired properties. Within 11 days, AMP-Designer enables de novo design of 18 novel candidates with broad-spectr…
External link:
http://arxiv.org/abs/2407.12296
Author:
Wang, Jike, Qin, Rui, Wang, Mingyang, Fang, Meijing, Zhang, Yangyang, Zhu, Yuchen, Su, Qun, Gou, Qiaolin, Shen, Chao, Zhang, Odin, Wu, Zhenxing, Jiang, Dejun, Zhang, Xujun, Zhao, Huifeng, Wan, Xiaozhe, Wu, Zhourui, Liu, Liwei, Kang, Yu, Hsieh, Chang-Yu, Hou, Tingjun
Significant interest has recently arisen in leveraging sequence-based large language models (LLMs) for drug design. However, most current applications of LLMs in drug discovery lack the ability to comprehend three-dimensional (3D) structures, …
External link:
http://arxiv.org/abs/2407.07930
In real-world environments, continual learning is essential for machine learning models, as they need to acquire new knowledge incrementally without forgetting what they have already learned. While pretrained language models have shown impressive …
External link:
http://arxiv.org/abs/2406.18708
Large language models (LLMs) possess extensive parametric knowledge, but this knowledge is difficult to update with new information because retraining is very expensive and infeasible for closed-source models. Knowledge editing (KE) has emerged as a …
External link:
http://arxiv.org/abs/2406.17764
Author:
Shao, Zhenwei, Yu, Zhou, Yu, Jun, Ouyang, Xuecheng, Zheng, Lihao, Gai, Zhenbiao, Wang, Mingyang, Ding, Jiajun
By harnessing the capabilities of large language models (LLMs), recent large multimodal models (LMMs) have shown remarkable versatility in open-world multimodal understanding. Nevertheless, they are usually parameter-heavy and computation-intensive, …
External link:
http://arxiv.org/abs/2405.12107