Showing 1 - 10 of 15,320 results for search: '"Yufan An"'
Author:
Zhongjie Yan, Yuanyu Wang, Yizhen Song, Yicong Ma, Yufan An, Ran Wen, Na Wang, Yun Huang, Xiuwen Wu
Published in:
BMC Complementary Medicine and Therapies, Vol 23, Iss 1, Pp 1-11 (2023)
Abstract: Background: Notopterygii Rhizoma et Radix (NRR) is commonly used for the treatment of inflammation-linked diseases. Phenethylferulate (PF) is abundant in crude NRR, but its anti-inflammatory effect remains unclear. Therefore, we aimed to …
External link:
https://doaj.org/article/f1d78ba86a414a21bb4763dc22352527
Mitigating Label Noise using Prompt-Based Hyperbolic Meta-Learning in Open-Set Domain Generalization
Author:
Peng, Kunyu, Wen, Di, Saquib, Sarfraz M., Chen, Yufan, Zheng, Junwei, Schneider, David, Yang, Kailun, Wu, Jiamin, Roitberg, Alina, Stiefelhagen, Rainer
Open-Set Domain Generalization (OSDG) is a challenging task requiring models to accurately predict familiar categories while minimizing confidence for unknown categories to effectively reject them in unseen domains. While the OSDG field has seen cons…
External link:
http://arxiv.org/abs/2412.18342
Author:
Chen, Junyu, Wei, Shuwen, Liu, Yihao, Bian, Zhangxing, He, Yufan, Carass, Aaron, Bai, Harrison, Du, Yong
Spatially varying regularization accommodates the deformation variations that may be necessary for different anatomical regions during deformable image registration. Historically, optimization-based registration models have harnessed spatially varyin…
External link:
http://arxiv.org/abs/2412.17982
Large multimodal models still struggle with text-rich images because of inadequate training data. Self-Instruct provides an annotation-free way for generating instruction data, but its quality is poor, as multimodal alignment remains a hurdle even fo…
External link:
http://arxiv.org/abs/2412.16364
Author:
Xu, Frank F., Song, Yufan, Li, Boxuan, Tang, Yuxuan, Jain, Kritanjali, Bao, Mengxue, Wang, Zora Z., Zhou, Xuhui, Guo, Zhitong, Cao, Murong, Yang, Mingyang, Lu, Hao Yang, Martin, Amaad, Su, Zhe, Maben, Leander, Mehta, Raj, Chi, Wayne, Jang, Lawrence, Xie, Yiqing, Zhou, Shuyan, Neubig, Graham
We interact with computers on an everyday basis, be it in everyday life or work, and many aspects of work can be done entirely with access to a computer and the Internet. At the same time, thanks to improvements in large language models (LLMs), there…
External link:
http://arxiv.org/abs/2412.14161
Author:
Shen, Xuan, Song, Zhao, Zhou, Yufa, Chen, Bo, Liu, Jing, Zhang, Ruiyi, Rossi, Ryan A., Tan, Hao, Yu, Tong, Chen, Xiang, Zhou, Yufan, Sun, Tong, Zhao, Pu, Wang, Yanzhi, Gu, Jiuxiang
Transformers have emerged as the leading architecture in deep learning, proving to be versatile and highly effective across diverse domains beyond language and image processing. However, their impressive performance often incurs high computational co…
External link:
http://arxiv.org/abs/2412.12441
We present SUGAR, a zero-shot method for subject-driven video customization. Given an input image, SUGAR is capable of generating videos for the subject contained in the image and aligning the generation with arbitrary visual attributes such as style…
External link:
http://arxiv.org/abs/2412.10533
Cross-border data transfer is vital for the digital economy by enabling data flow across different countries or regions. However, ensuring compliance with diverse data protection regulations during the transfer introduces significant complexities. Ex…
External link:
http://arxiv.org/abs/2412.08993
Deep learning models often struggle with generalization when deployed on real-world data, due to the common distributional shift from the training data. Test-time adaptation (TTA) is an emerging scheme used at inference time to address this issue. In…
External link:
http://arxiv.org/abs/2412.07980
Author:
Zhang, Yedi, Cai, Yufan, Zuo, Xinyue, Luan, Xiaokun, Wang, Kailong, Hou, Zhe, Zhang, Yifan, Wei, Zhiyuan, Sun, Meng, Sun, Jun, Sun, Jing, Dong, Jin Song
Large Language Models (LLMs) have emerged as a transformative AI paradigm, profoundly influencing daily life through their exceptional language understanding and contextual generation capabilities. Despite their remarkable performance, LLMs face a cr…
External link:
http://arxiv.org/abs/2412.06512