Showing 1 - 10 of 1,880 results for search: '"He Ran"'
Author:
Mattlat Dominique, Villoro Ruben Bueno, Jung Chanwon, Naderloo Raana Hatami, He Ran, Nielsch Kornelius, Zavanelli Duncan, Snyder G. Jeffrey, Zhang Siyuan, Scheu Christina
Published in:
BIO Web of Conferences, Vol 129, p 25055 (2024)
External link:
https://doaj.org/article/a3aebe78415a41079d1a978478125562
Author:
Yan Bo Chen, Hao Shi Bao, Ting Ting Hu, Zhou He, Biaolin Wen, Feng Tao Liu, Feng Xi Su, He Ran Deng, Jian Nan Wu
Published in:
BMC Cancer, Vol 22, Iss 1, Pp 1-9 (2022)
Abstract Background Axillary vein/subclavian vein (AxV/SCV) and internal jugular vein (IJV) are commonly used for implantable venous access port (IVAP) implantation in breast cancer patients undergoing chemotherapy. Previous research focused on comparison of …
External link:
https://doaj.org/article/5830c0f29867457a81e9fe91ab414e97
Most incremental learners excessively prioritize coarse classes of objects while neglecting the various kinds of states (e.g., color and material) attached to those objects. As a result, they are limited in their ability to reason about fine-grained compositionality …
External link:
http://arxiv.org/abs/2411.01739
Prompt-based all-in-one image restoration (IR) frameworks have achieved remarkable performance by incorporating degradation-specific information into prompt modules. Nevertheless, handling the complex and diverse degradations encountered in real-world …
External link:
http://arxiv.org/abs/2410.15385
Author:
Han, Xiaotian, Jian, Yiren, Hu, Xuefeng, Liu, Haogeng, Wang, Yiqi, Fan, Qihang, Ai, Yuang, Huang, Huaibo, He, Ran, Yang, Zhenheng, You, Quanzeng
Pre-training on large-scale, high-quality datasets is crucial for enhancing the reasoning capabilities of Large Language Models (LLMs), especially in specialized domains such as mathematics. Despite the recognized importance, Multimodal LLMs (MLLMs) …
External link:
http://arxiv.org/abs/2409.12568
Diffusion-based text-to-image generation models have significantly advanced the field of art content synthesis. However, current portrait stylization methods generally require either model fine-tuning based on examples or the employment of DDIM Inversion …
External link:
http://arxiv.org/abs/2408.05492
Author:
Fu, Chaoyou, Lin, Haojia, Long, Zuwei, Shen, Yunhang, Zhao, Meng, Zhang, Yifan, Dong, Shaoqi, Wang, Xiong, Yin, Di, Ma, Long, Zheng, Xiawu, He, Ran, Ji, Rongrong, Wu, Yunsheng, Shan, Caifeng, Sun, Xing
The remarkable multimodal capabilities and interactive experience of GPT-4o underscore their necessity in practical applications, yet open-source models rarely excel in both areas. In this paper, we introduce VITA, the first-ever open-source Multimodal …
External link:
http://arxiv.org/abs/2408.05211
Low-rank adaptation, also known as LoRA, has emerged as a prominent method for parameter-efficient fine-tuning of foundation models. Despite its computational efficiency, LoRA still yields inferior performance compared to full fine-tuning. In this paper …
External link:
http://arxiv.org/abs/2407.18242
Test-time adaptation (TTA) aims to address the distribution shift between training and test data using only unlabeled data at test time. Existing TTA methods often focus on improving recognition performance specifically for test data associated with …
External link:
http://arxiv.org/abs/2407.15773
Video generation has made remarkable progress in recent years, especially since the advent of video diffusion models. Many video generation models can produce plausible synthetic videos, e.g., Stable Video Diffusion (SVD). However, most video models …
External link:
http://arxiv.org/abs/2406.00908