Showing 1 - 10 of 422 for search: '"Huang Shujian"'
Proteins are essential macromolecules defined by their amino acid sequences, which determine their three-dimensional structures and, consequently, their functions in all living organisms. Therefore, generative protein modeling necessitates a multimodal…
External link:
http://arxiv.org/abs/2410.13782
Having been trained on massive pretraining data, large language models have shown excellent performance on many knowledge-intensive tasks. However, pretraining data tends to contain misleading and even conflicting information, and it is intriguing to…
External link:
http://arxiv.org/abs/2410.04784
Author:
Saeed El‐Ashram, Yu Zhang, Yongsheng Ji, Dina Salama, Kun Mei, Li Zhili, Huang Shujian, Haoji Zhang, Shawky M. Aboelhadid, Reem A. Alajmi, Dina M. Metwally, Manal F. El‐Khadragy, Billy M. Hargis, Guillermo Tellez‐Isaias, Beniamino T. Cenci‐Goga, Musafiri Karama, Munyaradzi C. Marufu, Fathi Abouhajer, Gamal Ali Abdelhafez Hamady, Abeer El Wakil, Ibrahim Al Nasr, Xun Suo
Published in:
Veterinary Medicine and Science, Vol 7, Iss 2, Pp 357-361 (2021)
Abstract: This study describes a simple method for the large‐scale isolation of pure Toxoplasma gondii tachyzoites and bradyzoites. T. gondii tachyzoites were obtained from infected human foreskin fibroblasts (HFFs) and peritoneal exudates of mice,…
External link:
https://doaj.org/article/5c5de8061d014bf398728939b7146545
Author:
Zhou, Hao, Wang, Zhijun, Huang, Shujian, Huang, Xin, Han, Xue, Feng, Junlan, Deng, Chao, Luo, Weihua, Chen, Jiajun
Large Language Models (LLMs) are often English-centric due to the disproportionate distribution of languages in their pre-training data. Enhancing non-English language capabilities through post-pretraining often results in catastrophic forgetting of…
External link:
http://arxiv.org/abs/2408.11396
Author:
Zhuang, Ziyuan, Zhang, Zhiyang, Cheng, Sitao, Yang, Fangkai, Liu, Jia, Huang, Shujian, Lin, Qingwei, Rajmohan, Saravan, Zhang, Dongmei, Zhang, Qi
Retrieval-augmented generation (RAG) methods encounter difficulties when addressing complex questions like multi-hop queries. While iterative retrieval methods improve performance by gathering additional information, current approaches often rely on…
External link:
http://arxiv.org/abs/2408.04259
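As a rough illustration of the iterative-retrieval idea this entry mentions, here is a minimal Python sketch of a multi-hop RAG loop. The helpers `retrieve` and `generate`, the `NEED:` follow-up convention, and the fixed hop budget are all hypothetical assumptions for illustration, not the paper's method.

```python
# Illustrative multi-hop RAG loop; `retrieve(query) -> list[str]` and
# `generate(question, passages) -> str` are hypothetical helpers, and the
# "NEED: <follow-up question>" convention is an assumed stopping signal.
def iterative_rag(question: str, retrieve, generate, max_hops: int = 3) -> str:
    evidence: list[str] = []
    query = question
    for _ in range(max_hops):
        evidence.extend(retrieve(query))      # gather additional passages
        draft = generate(question, evidence)  # answer from evidence so far
        if not draft.startswith("NEED:"):     # enough information: done
            return draft
        query = draft[len("NEED:"):].strip()  # retrieve the missing fact next
    return generate(question, evidence)       # best effort once hops run out
```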
Author:
Ding, Peng, Wu, Jingyu, Kuang, Jun, Ma, Dan, Cao, Xuezhi, Cai, Xunliang, Chen, Shi, Chen, Jiajun, Huang, Shujian
Multi-modal Large Language Models (MLLMs) have demonstrated remarkable performance on various visual-language understanding and generation tasks. However, MLLMs occasionally generate content inconsistent with the given images, which is known as "hallucination"…
External link:
http://arxiv.org/abs/2408.01355
Large language models demonstrate reasonable multilingual abilities, despite predominantly English-centric pretraining. However, the spontaneous multilingual alignment in these models is shown to be weak, leading to unsatisfactory cross-lingual transfer…
External link:
http://arxiv.org/abs/2407.16222
Decoding by contrasting layers (DoLa) is designed to improve the generation quality of large language models (LLMs) by contrasting the prediction probabilities between an early-exit output (amateur logits) and the final output (expert logits). However,…
External link:
http://arxiv.org/abs/2407.10795
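This entry spells out DoLa's core mechanism: contrast the final layer's prediction (expert logits) against an early-exit prediction (amateur logits). Below is a minimal PyTorch sketch of that contrastive scoring step; the function name and the `alpha` plausibility threshold are illustrative assumptions, and the paper's dynamic choice of which early layer supplies the amateur logits is omitted.

```python
# Minimal sketch of DoLa-style contrastive scoring, assuming expert and
# amateur logits over the same vocabulary are already computed; `alpha`
# and the function name are illustrative, not the paper's exact interface.
import math
import torch
import torch.nn.functional as F

def dola_scores(expert_logits: torch.Tensor,
                amateur_logits: torch.Tensor,
                alpha: float = 0.1) -> torch.Tensor:
    expert_logp = F.log_softmax(expert_logits, dim=-1)
    amateur_logp = F.log_softmax(amateur_logits, dim=-1)
    # Reward tokens the mature (final) layer prefers over the early exit.
    scores = expert_logp - amateur_logp
    # Plausibility constraint: mask tokens far below the expert's top choice,
    # so the contrast cannot promote tokens the expert itself rules out.
    cutoff = expert_logp.max(dim=-1, keepdim=True).values + math.log(alpha)
    return scores.masked_fill(expert_logp < cutoff, float("-inf"))

# Greedy pick of the next token under the contrastive score:
# next_id = dola_scores(expert_logits, amateur_logits).argmax(dim=-1)
```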
Author:
Hu, Peng, Liu, Sizhe, Gao, Changjiang, Huang, Xin, Han, Xue, Feng, Junlan, Deng, Chao, Huang, Shujian
Large Language Models have demonstrated impressive reasoning capabilities across multiple languages. However, the relationship between capabilities in different languages is less explored. In this work, we decompose the process of reasoning tasks into…
External link:
http://arxiv.org/abs/2406.16655
Large Language Models (LLMs) possess extensive knowledge and strong capabilities in performing in-context reasoning. However, previous work challenges their out-of-context reasoning ability, i.e., the ability to infer information from their training…
External link:
http://arxiv.org/abs/2406.07393