Showing 1 - 10 of 30 results for search: '"Li, Siyuan"'
Demonstrations are widely used in Deep Reinforcement Learning (DRL) for facilitating solving tasks with sparse rewards. However, the tasks in real-world scenarios can often have varied initial conditions from the demonstration, which would require…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1719c1264acf7e692eca1837274aec4f
http://arxiv.org/abs/2307.02889
Author:
Tan, Cheng, Li, Siyuan, Gao, Zhangyang, Guan, Wenfei, Wang, Zedong, Liu, Zicheng, Wu, Lirong, Li, Stan Z.
Spatio-temporal predictive learning is a learning paradigm that enables models to learn spatial and temporal patterns by predicting future frames from given past frames in an unsupervised manner. Despite remarkable progress in recent years, a lack of…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::c4576b650051db5dcf72c7520d991eab
http://arxiv.org/abs/2306.11249
In the field of artificial intelligence for science, it is consistently an essential challenge to face a limited amount of labeled data for real-world problems. The prevailing approach is to pretrain a powerful task-agnostic model on a large unlabeled…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::fc209062d7eb083f5101d0ab7101da28
http://arxiv.org/abs/2304.03906
Author:
Zheng, Jiangbin, Wang, Ge, Huang, Yufei, Hu, Bozhen, Li, Siyuan, Tan, Cheng, Fan, Xinwen, Li, Stan Z.
Pretrained protein structure models without labels are crucial foundations for the majority of protein downstream applications. The conventional structure pretraining methods follow the mature natural language pretraining methods such as denoised reconstruction…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::09111eb71e5f892574ed6b50f1b360c5
http://arxiv.org/abs/2303.11783
Author:
Ye, Mingqiao, Ke, Lei, Li, Siyuan, Tai, Yu-Wing, Tang, Chi-Keung, Danelljan, Martin, Yu, Fisher
Object localization in general environments is a fundamental part of vision systems. While dominating on the COCO benchmark, recent Transformer-based detection methods are not competitive in diverse domains. Moreover, these methods still struggle to…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::1d48b8490c6785dbe9d4f1bac03c4078
Author:
Wu, Fang, Li, Siyuan, Wu, Lirong, Radev, Dragomir, Jiang, Yinghui, Jin, Xurui, Niu, Zhangming, Li, Stan Z.
The great success in graph neural networks (GNNs) provokes the question about explainability: Which fraction of the input graph is the most determinant of the prediction? Particularly, parametric explainers prevail in existing approaches because of…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b36601738ae9e84d2e0be7ff6c497b21
Author:
Wang, Xiao, Zhou, Weikang, Zu, Can, Xia, Han, Chen, Tianze, Zhang, Yuansen, Zheng, Rui, Ye, Junjie, Zhang, Qi, Gui, Tao, Kang, Jihua, Yang, Jingsheng, Li, Siyuan, Du, Chunsai
Large language models have unlocked strong multi-task capabilities from reading instructive prompts. However, recent studies have shown that existing large models still have difficulty with information extraction tasks. For example, gpt-3.5-turbo achieves…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::17018d9ba28379ec5bfad92ba8a8f353
Author:
Zheng, Jiangbin, Wang, Yile, Tan, Cheng, Li, Siyuan, Wang, Ge, Xia, Jun, Chen, Yidong, Li, Stan Z.
Sign language recognition (SLR) is a weakly supervised task that annotates sign videos as textual glosses. Recent studies show that insufficient training caused by the lack of large-scale available sign datasets becomes the main bottleneck for SLR. …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::cbd8f3595fe22f2460b1ee7dfcac787a
Offline reinforcement learning (RL) enables the agent to effectively learn from logged data, which significantly extends the applicability of RL algorithms in real-world scenarios where exploration can be expensive or unsafe. Previous works have shown…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::4e7f5ffa27b3b87e7a1e916c6ef89739
http://arxiv.org/abs/2212.01105
We investigate a practical domain adaptation task, called source-free domain adaptation (SFUDA), where the source-pretrained model is adapted to the target domain without access to the source data. Existing techniques mainly leverage self-supervised…
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::b9250d980f37e884e313dffc375cea12
http://arxiv.org/abs/2211.06612