Showing 1 - 10 of 1,457 for search: '"Wang Hongyi"'
Author:
Tao, Tianhua, Li, Junbo, Tan, Bowen, Wang, Hongyi, Marshall, William, Kanakiya, Bhargav M, Hestness, Joel, Vassilieva, Natalia, Shen, Zhiqiang, Xing, Eric P., Liu, Zhengzhong
Large Language Models (LLMs) specializing in code generation, often referred to as code LLMs (e.g., StarCoder and Code Llama), play increasingly critical roles in various software development scenarios. It is also crucial for code LLMs …
External link:
http://arxiv.org/abs/2411.04156
Semi-supervised learning (SSL) for medical image segmentation is a challenging yet highly practical task that reduces reliance on large-scale labeled datasets by leveraging unlabeled samples. Among SSL techniques, the weak-to-strong consistency framework …
External link:
http://arxiv.org/abs/2410.13486
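The weak-to-strong consistency framework named above is commonly implemented as FixMatch-style pixel-wise pseudo-labeling. The sketch below illustrates only that generic idea, under assumed placeholder names (model, weak_aug, strong_aug); it is not the indexed paper's method.

import torch
import torch.nn.functional as F

def consistency_loss(model, images, weak_aug, strong_aug, threshold=0.95):
    # Pseudo-label each pixel from the weakly augmented view (no gradients).
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(images)), dim=1)  # (B, C, H, W)
        conf, pseudo = probs.max(dim=1)                        # per-pixel confidence, labels
    # Train the strongly augmented view to match confident pseudo-labels.
    # Assumes strong_aug preserves geometry (e.g., intensity jitter only);
    # otherwise the pseudo-label map must be warped identically.
    logits = model(strong_aug(images))
    loss = F.cross_entropy(logits, pseudo, reduction="none")   # (B, H, W)
    mask = (conf >= threshold).float()                         # keep confident pixels
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)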
Author:
Zhao, Xinyu, Sun, Guoheng, Cai, Ruisi, Zhou, Yukun, Li, Pingzhi, Wang, Peihao, Tan, Bowen, He, Yexiao, Chen, Li, Liang, Yi, Chen, Beidi, Yuan, Binhang, Wang, Hongyi, Li, Ang, Wang, Zhangyang, Chen, Tianlong
As Large Language Models (LLMs) excel across tasks and specialized domains, scaling LLMs by building on existing models has garnered significant attention, though it faces the challenge of degraded performance when disparate models are combined. Various techniques …
External link:
http://arxiv.org/abs/2410.05357
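For context on the model-combination setting described above, the simplest merging baseline is uniform weight averaging of architecturally identical checkpoints ("model soup" style). This sketch shows only that baseline, not the paper's technique; the checkpoints in the usage note are hypothetical.

import torch

def average_state_dicts(state_dicts):
    # Uniformly average parameters of models sharing one architecture.
    merged = {}
    for key in state_dicts[0]:
        merged[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return merged

# Usage with hypothetical fine-tuned checkpoints m1, m2 of a shared base:
# model.load_state_dict(average_state_dicts([m1.state_dict(), m2.state_dict()]))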
The advancement of Spatial Transcriptomics (ST) has facilitated the spatially-aware profiling of gene expression based on histopathology images. Although ST data offers valuable insights into the micro-environment of tumors, its acquisition cost remains …
External link:
http://arxiv.org/abs/2409.15092
The rapid development of Large Language Models (LLMs) has been pivotal in advancing AI, with pre-trained LLMs being adaptable to diverse downstream tasks through fine-tuning. Federated learning (FL) further enhances fine-tuning in a privacy-aware manner …
External link:
http://arxiv.org/abs/2409.05976
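Federated fine-tuning as mentioned above typically builds on FedAvg. Below is a minimal single-round FedAvg sketch under assumed names (global_model, client_loaders); real systems add client sampling, secure aggregation, and often parameter-efficient adapters, and this is not the indexed paper's protocol.

import copy
import torch
import torch.nn.functional as F

def fedavg_round(global_model, client_loaders, lr=1e-4, local_steps=10):
    client_states, sizes = [], []
    for loader in client_loaders:
        local = copy.deepcopy(global_model)   # each client starts from global weights
        opt = torch.optim.SGD(local.parameters(), lr=lr)
        data = iter(loader)                   # assumes >= local_steps batches per client
        for _ in range(local_steps):
            x, y = next(data)
            loss = F.cross_entropy(local(x), y)
            opt.zero_grad()
            loss.backward()
            opt.step()
        client_states.append(local.state_dict())
        sizes.append(len(loader.dataset))
    # Aggregate: average client weights, weighted by local dataset size.
    total = sum(sizes)
    merged = {k: sum((n / total) * sd[k].float() for sd, n in zip(client_states, sizes))
              for k in client_states[0]}
    global_model.load_state_dict(merged)
    return global_model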
Author:
Ouyang, Shuyi, Wang, Hongyi, Niu, Ziwei, Bai, Zhenjia, Xie, Shiao, Xu, Yingying, Tong, Ruofeng, Chen, Yen-Wei, Lin, Lanfen
Published in:
Proceedings of the 31st ACM International Conference on Multimedia. 2023: 4768-4777
The task of multi-label image classification involves recognizing multiple objects within a single image. Considering both the valuable semantic information contained in the labels and the essential visual features present in the image, tight visual-linguistic …
External link:
http://arxiv.org/abs/2407.16244
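One common way to realize the tight visual-linguistic interaction sketched above is cross-attention from label embeddings to image patch features. The module below is an illustrative guess at that pattern, with invented names and dimensions, not the architecture from the paper.

import torch
import torch.nn as nn

class LabelQueryHead(nn.Module):
    def __init__(self, num_labels, dim=512, heads=8):
        super().__init__()
        self.label_emb = nn.Embedding(num_labels, dim)    # one learnable query per label
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(dim, 1)               # one binary logit per label

    def forward(self, patch_feats):                       # patch_feats: (B, N, dim)
        b = patch_feats.size(0)
        queries = self.label_emb.weight.unsqueeze(0).expand(b, -1, -1)
        attended, _ = self.attn(queries, patch_feats, patch_feats)
        return self.classifier(attended).squeeze(-1)      # (B, num_labels) logits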
Author:
Wang, Hongyi, Sun, Ji, Liang, Jinzhe, Zhai, Li, Tang, Zitian, Li, Zijian, Zhai, Wei, Wang, Xusheng, Gao, Weihao, Gong, Sheng
The ionic bonding across the lattice and ordered microscopic structures endow crystals with unique symmetry and determine their macroscopic properties. Unconventional crystals, in particular, exhibit non-traditional lattice structures or possess exotic …
External link:
http://arxiv.org/abs/2407.16131
Vision Transformers (ViTs) have achieved remarkable performance in various image classification tasks by leveraging the attention mechanism to process image patches as tokens. However, the high computational and memory demands of ViTs pose significant …
External link:
http://arxiv.org/abs/2406.18051
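Token reduction is one widely used answer to the ViT cost problem noted above; the helper below sketches generic attention-guided token pruning (keep the patches the [CLS] token attends to most). It is offered only as background and may differ from whatever the indexed paper proposes.

import torch

def prune_tokens(tokens, cls_attn, keep_ratio=0.5):
    # tokens: (B, N+1, D) with [CLS] first; cls_attn: (B, N) CLS-to-patch attention.
    b, n_plus_1, d = tokens.shape
    k = max(1, int((n_plus_1 - 1) * keep_ratio))
    top = cls_attn.topk(k, dim=1).indices                 # most-attended patch indices
    idx = top.unsqueeze(-1).expand(-1, -1, d)
    kept = torch.gather(tokens[:, 1:], 1, idx)            # gather surviving patches
    return torch.cat([tokens[:, :1], kept], dim=1)        # re-attach [CLS]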
Large Language Models (LLMs) have demonstrated exceptional task-solving capabilities, increasingly adopting roles akin to human-like assistants. The broader integration of LLMs into society has sparked interest in whether they manifest psychological …
External link:
http://arxiv.org/abs/2406.17675
Author:
Fu, Tianyu, Huang, Haofeng, Ning, Xuefei, Zhang, Genghan, Chen, Boju, Wu, Tianqi, Wang, Hongyi, Huang, Zixiao, Li, Shiyao, Yan, Shengen, Dai, Guohao, Yang, Huazhong, Wang, Yu
Sparse attention can effectively mitigate the significant memory and throughput demands of Large Language Models (LLMs) in long contexts. Existing methods typically employ a uniform sparse attention mask, applying the same sparse pattern across different …
External link:
http://arxiv.org/abs/2406.14909
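A "uniform sparse attention mask" as described above is a fixed pattern shared across heads and inputs; a standard instance is a causal sliding window plus a few always-visible "sink" tokens. The sketch below shows that standard pattern, not the paper's proposed alternative.

import torch

def sliding_window_mask(seq_len, window=256, num_sinks=4):
    i = torch.arange(seq_len).unsqueeze(1)                # query positions
    j = torch.arange(seq_len).unsqueeze(0)                # key positions
    causal = j <= i                                       # decoder-style causality
    local = (i - j) < window                              # recent-token window
    sink = j < num_sinks                                  # always-visible prefix
    return causal & (local | sink)                        # True = may attend

# Apply before softmax: scores.masked_fill(~mask, float('-inf'))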