Showing 1 - 10 of 472 for search: '"Ping, Qing"'
Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications
Author:
Xie, Han, Zheng, Da, Ma, Jun, Zhang, Houyu, Ioannidis, Vassilis N., Song, Xiang, Ping, Qing, Wang, Sheng, Yang, Carl, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul
Model pre-training on large text corpora has been demonstrated effective for various downstream applications in the NLP domain. In the graph mining domain, a similar analogy can be drawn for pre-training graph models on large graphs in the hope of be…
External link:
http://arxiv.org/abs/2306.02592
Author:
Sheng Zhao, Zuoxiang Wang, Ping Qing, Minghui Li, Qingrong Liu, Xuejie Pang, Keke Wang, Xiaojin Gao, Jie Zhao, Yongjian Wu
Published in:
Cardiovascular Diabetology, Vol 23, Iss 1, Pp 1-18 (2024)
Abstract Background The triglyceride-glucose (TyG) index is associated with the development and prognosis of coronary artery disease (CAD). However, the impact of the TyG index on CAD severity across different glucose metabolism states exhibits signi…
External link:
https://doaj.org/article/1695210744be46f59dda731e330f811a
Author:
Sheng Zhao, Zuoxiang Wang, Ping Qing, Minghui Li, Qingrong Liu, Keke Wang, Xiaojin Gao, Jie Zhao, Yongjian Wu
Published in:
Diabetology & Metabolic Syndrome, Vol 16, Iss 1, Pp 1-11 (2024)
Abstract Background Mounting evidence supports a significant correlation between the stress hyperglycemia ratio (SHR) and both short- and long-term prognoses in patients with acute coronary syndrome (ACS). Nevertheless, research examining the associa…
External link:
https://doaj.org/article/1dd892baf3a546dcbfd398c3000aa049
Author:
Jiang, Qian, Chen, Changyou, Zhao, Han, Chen, Liqun, Ping, Qing, Tran, Son Dinh, Xu, Yi, Zeng, Belinda, Chilimbi, Trishul
Contrastive loss has been increasingly used in learning representations from multiple modalities. In the limit, the nature of the contrastive loss encourages modalities to exactly match each other in the latent space. Yet it remains an open question…
External link:
http://arxiv.org/abs/2303.05952
Author:
Xue-Ting Pei, Shu-Hua Wang, Guo-Ping Qing, Xiao-Wei Yu, Yan Shi, Wen-Li Yang, Ning-Li Wang, Zhi-Gang Fan
Published in:
BMC Ophthalmology, Vol 24, Iss 1, Pp 1-9 (2024)
Abstract Background This study aims to investigate the morphologic features of the crystalline lens in Primary Angle Closure Disease (PACD) patients with zonular instability during cataract surgery using the swept-source CASIA 2 Anterior Segment-Opti…
External link:
https://doaj.org/article/ad0241201378424b8286998586365867
Published in:
Future Foods, Vol 9, Iss , Pp 100294- (2024)
This study focuses on Chinese consumers' cognition and attitude towards artificial meat, including cultured meat and plant-based meat. The attitude is measured by three aspects: willingness to accept (WTA), willingness to taste (WTT), and willingne…
External link:
https://doaj.org/article/e820f8cd2df24188ba7784c6b15f1584
Published in:
BMC Medical Education, Vol 23, Iss 1, Pp 1-10 (2023)
Abstract Background Postgraduate medical education in oncology orthopedics confronts obstacles when instructing on pelvic tumors, primarily due to their intricate anatomy and the limitations of conventional teaching techniques. The employment of Thre…
External link:
https://doaj.org/article/f314786fe73f46adb91bcb299fbe006d
Published in:
2022 IEEE International Conference on Data Mining Workshops (ICDMW), Orlando, FL, USA, 2022, pp. 958-966
To solve video-and-language grounding tasks, the key is for the network to understand the connection between the two modalities. For a pair of video and language description, their semantic relation is reflected by their encodings' similarity. A good…
External link:
http://arxiv.org/abs/2204.10938
Author:
Bara, Cristian-Paul, Ping, Qing, Mathur, Abhinav, Thattai, Govind, MV, Rohith, Sukhatme, Gaurav S.
We introduce a novel privacy-preserving methodology for performing Visual Question Answering on the edge. Our method constructs a symbolic representation of the visual scene, using a low-complexity computer vision model that jointly predicts classes,…
External link:
http://arxiv.org/abs/2202.07712
Outside-knowledge visual question answering (OK-VQA) requires the agent to comprehend the image, make use of relevant knowledge from the entire web, and digest all the information to answer the question. Most previous works address the problem by fir…
External link:
http://arxiv.org/abs/2201.05299