Showing 1 - 10 of 1,677 for search: '"ZHANG Chenchen"'
Author:
Gan Guangming, Chen Mei, Yu Qinfeng, Gao Xiang, Zhang Chenchen, Sheng Qingyuan, Xie Wei, Geng Junhua
Published in:
Frontiers in Cellular Neuroscience, Vol 17 (2023)
The Drosophila larval neuromuscular junction (NMJ) is a well-known model system and is often used to study synapse development. Here, we show synaptic degeneration at NMJ boutons, primarily based on transmission electron microscopy (TEM) studies. Whe…
External link:
https://doaj.org/article/a8f5b76d063f43cc93f5832c0c4f337e
Author:
Wang, Pei, Wu, Yanan, Wang, Zekun, Liu, Jiaheng, Song, Xiaoshuai, Peng, Zhongyuan, Deng, Ken, Zhang, Chenchen, Wang, Jiakai, Peng, Junran, Zhang, Ge, Guo, Hangyu, Zhang, Zhaoxiang, Su, Wenbo, Zheng, Bo
Large Language Models (LLMs) have displayed massive improvements in reasoning and decision-making skills and can hold natural conversations with users. Recently, many tool-use benchmark datasets have been proposed. However, existing datasets have the…
External link:
http://arxiv.org/abs/2410.11710
Author:
Liu, Jiaheng, Zhang, Chenchen, Guo, Jinyang, Zhang, Yuanxing, Que, Haoran, Deng, Ken, Bai, Zhiqi, Liu, Jie, Zhang, Ge, Wang, Jiakai, Wu, Yanan, Liu, Congnan, Su, Wenbo, Wang, Jiamang, Qu, Lin, Zheng, Bo
Despite the advanced intelligence abilities of large language models (LLMs) in various applications, they still face significant computational and storage demands. Knowledge Distillation (KD) has emerged as an effective strategy to improve the perfor…
External link:
http://arxiv.org/abs/2407.16154
Author:
Gavin, Shawn, Zheng, Tuney, Liu, Jiaheng, Que, Quehry, Wang, Noah, Yang, Jian, Zhang, Chenchen, Huang, Wenhao, Chen, Wenhu, Zhang, Ge
The long-context capabilities of large language models (LLMs) have been a hot topic in recent years. To evaluate the performance of LLMs in different scenarios, various assessment benchmarks have emerged. However, as most of these benchmarks focus on…
External link:
http://arxiv.org/abs/2406.17588
Author:
Wang, Leyan, Jin, Yonggang, Shen, Tianhao, Zheng, Tianyu, Du, Xinrun, Zhang, Chenchen, Huang, Wenhao, Liu, Jiaheng, Wang, Shi, Zhang, Ge, Xiang, Liuyu, He, Zhaofeng
As large language models (LLMs) continue to develop and gain widespread application, the ability of LLMs to exhibit empathy towards diverse group identities and understand their perspectives is increasingly recognized as critical. Most existing bench…
External link:
http://arxiv.org/abs/2406.14903
Author:
Que, Haoran, Liu, Jiaheng, Zhang, Ge, Zhang, Chenchen, Qu, Xingwei, Ma, Yinghao, Duan, Feiyu, Bai, Zhiqi, Wang, Jiakai, Zhang, Yuanxing, Tan, Xu, Fu, Jie, Su, Wenbo, Wang, Jiamang, Qu, Lin, Zheng, Bo
Continual Pre-Training (CPT) on Large Language Models (LLMs) has been widely used to expand the model's fundamental understanding of specific downstream domains (e.g., math and code). For the CPT on domain-specific LLMs, one important question is how…
External link:
http://arxiv.org/abs/2406.01375
Author:
Deng, Ken, Liu, Jiaheng, Zhu, He, Liu, Congnan, Li, Jingxin, Wang, Jiakai, Zhao, Peng, Zhang, Chenchen, Wu, Yanan, Yin, Xueqiao, Zhang, Yuanxing, Su, Wenbo, Xiang, Bangyu, Ge, Tiezheng, Zheng, Bo
Code completion models have made significant progress in recent years. Recently, repository-level code completion has drawn more attention in modern software development, and several baseline methods and benchmarks have been proposed. However, existi…
External link:
http://arxiv.org/abs/2406.01359
Published in:
Scientific Reports, Vol 10, Iss 1, Pp 1-15 (2020)
Abstract The lungs and skin are important respiratory organs in Anura, but the pulmonary structure of amphibians remains unclear due to the lack of a suitable procedure. This study improved the procedure used for fixing lung tissues and used light m…
External link:
https://doaj.org/article/4303070611cf402a822706fbb94a66e8
Published in:
Di-san junyi daxue xuebao, Vol 41, Iss 18, Pp 1782-1788 (2019)
Objective To assess the effect of different physical therapies for pulmonary rehabilitation in patients with acute exacerbation of severe chronic obstructive pulmonary disease (AECOPD). Methods Between January 2016 and September 2018, a total of 6…
External link:
https://doaj.org/article/a652396aba1d4c899bc2b04799d11df3
Author:
Zhang, Ge, Qu, Scott, Liu, Jiaheng, Zhang, Chenchen, Lin, Chenghua, Yu, Chou Leuang, Pan, Danny, Cheng, Esther, Liu, Jie, Lin, Qunshu, Yuan, Raven, Zheng, Tuney, Pang, Wei, Du, Xinrun, Liang, Yiming, Ma, Yinghao, Li, Yizhi, Ma, Ziyang, Lin, Bill, Benetos, Emmanouil, Yang, Huan, Zhou, Junting, Ma, Kaijing, Liu, Minghao, Niu, Morry, Wang, Noah, Que, Quehry, Liu, Ruibo, Liu, Sine, Guo, Shawn, Gao, Soren, Zhou, Wangchunshu, Zhang, Xinyue, Zhou, Yizhi, Wang, Yubo, Bai, Yuelin, Zhang, Yuhan, Zhang, Yuxiang, Wang, Zenith, Yang, Zhenzhu, Zhao, Zijian, Zhang, Jiajun, Ouyang, Wanli, Huang, Wenhao, Chen, Wenhu
Large Language Models (LLMs) have made great strides in recent years to achieve unprecedented performance across different tasks. However, due to commercial interest, the most competitive models like GPT, Gemini, and Claude have been gated behind pro…
External link:
http://arxiv.org/abs/2405.19327