Showing 1 - 10
of 59,956
for search: '"Yali An"'
Author:
Fei Chen, Yifan He, Jinping Wang, Liping Yu, Qiuhong Gong, Yanyan Chen, Yali An, Siyao He, Guangwei Li, Bo Zhang
Published in:
Journal of Diabetes, Vol 16, Iss 8, Pp n/a-n/a (2024)
Abstract Background This study aimed to investigate the potential differences in the influence of impaired glucose tolerance (IGT) with and without metabolic syndrome (MetS) on cardiovascular (CV) events and mortality. Methods Participants having IGT…
External link:
https://doaj.org/article/57cae56239f2473faeb0245f68e486a8
Author:
Xin Qian, Jinping Wang, Qiuhong Gong, Yali An, Xinxing Feng, Siyao He, Xiaoping Chen, Wenjuan Wang, Lihong Zhang, Yuanchi Hui, Xiuwei Zhai, Bo Zhang, Yanyan Chen, Guangwei Li
Published in:
PLoS Medicine, Vol 21, Iss 7, p e1004419 (2024)
Background The association between years of non-diabetes status after diagnosis of impaired glucose tolerance (IGT) and the risk of long-term death and cardiovascular outcomes needed to be clarified. Methods and findings In this post hoc analysis, we in…
External link:
https://doaj.org/article/ebd58b5f866a4c70b327221303ac4f05
Author:
Xin Qian, Hongmei Jia, Jinping Wang, Siyao He, Meng Yu, Xinxing Feng, Qiuhong Gong, Yali An, Xuan Wang, Na Shi, Hui Li, Zhongmei Zou, Guangwei Li, Yanyan Chen
Published in:
Cardiovascular Diabetology, Vol 23, Iss 1, Pp 1-10 (2024)
Abstract Background Higher levels of palmitoyl sphingomyelin (PSM, synonymous with sphingomyelin 16:0) are associated with an increased risk of cardiovascular disease (CVD) in people with diabetes. Whether circulating PSM levels can practically predi…
External link:
https://doaj.org/article/28db9edbc90d437db015c478a36b3cc2
Author:
Fei Chen, Jinping Wang, Xiaoping Chen, Liping Yu, Yali An, Qiuhong Gong, Bo Chen, Shuo Xie, Lihong Zhang, Ying Shuai, Fang Zhao, Yanyan Chen, Guangwei Li, Bo Zhang
Published in:
Diabetology & Metabolic Syndrome, Vol 15, Iss 1, Pp 1-7 (2023)
Abstract Background This study aimed to develop cardiovascular disease (CVD) risk equations for Chinese patients with newly diagnosed type 2 diabetes (T2D) to predict 10-, 20-, and 30-year risk. Methods Risk equations for forecasting the occurrenc…
External link:
https://doaj.org/article/bdaf479956f84855aabd6db2fb1267ec
Vision-language foundation models (such as CLIP) have recently shown their power in transfer learning, owing to large-scale image-text pre-training. However, target domain data in the downstream tasks can be highly different from the pre-training pha…
External link:
http://arxiv.org/abs/2410.12183
Remote sensing image change detection (RSCD) is crucial for monitoring dynamic surface changes, with applications ranging from environmental monitoring to disaster assessment. While traditional CNN-based methods have improved detection accuracy, they…
External link:
http://arxiv.org/abs/2410.11580
Author:
Nazarczuk, Michal, Catley-Chandar, Sibi, Tanay, Thomas, Shaw, Richard, Pérez-Pellitero, Eduardo, Timofte, Radu, Yan, Xing, Wang, Pan, Guo, Yali, Wu, Yongxin, Cai, Youcheng, Yang, Yanan, Li, Junting, Zhou, Yanghong, Mok, P. Y., He, Zongqi, Xiao, Zhe, Chan, Kin-Chung, Goshu, Hana Lebeta, Yang, Cuixin, Dong, Rongkang, Xiao, Jun, Lam, Kin-Man, Hao, Jiayao, Gao, Qiong, Zu, Yanyan, Zhang, Junpei, Jiao, Licheng, Liu, Xu, Purohit, Kuldeep
This paper reviews the challenge on Sparse Neural Rendering that was part of the Advances in Image Manipulation (AIM) workshop, held in conjunction with ECCV 2024. This manuscript focuses on the competition set-up, the proposed methods and their resp…
External link:
http://arxiv.org/abs/2409.15045
Analyzing real-world multimodal signals is an essential and challenging task for intelligent voice assistants (IVAs). Mainstream approaches have achieved remarkable performance on various downstream tasks of IVAs with pre-trained audio models and tex…
External link:
http://arxiv.org/abs/2409.09289
Contrastive learning has become one of the most impressive approaches for multi-modal representation learning. However, previous multi-modal works mainly focused on cross-modal understanding, ignoring in-modal contrastive learning, which limits the r…
External link:
http://arxiv.org/abs/2409.09282
With the goal of more natural and human-like interaction with virtual voice assistants, recent research in the field has focused on a full-duplex interaction mode without relying on repeated wake-up words. This requires that in scenes with complex soun…
External link:
http://arxiv.org/abs/2409.09284