Showing 1 - 10 of 563 results for search: '"Zhou Yifei"'
Author:
Tian Ruimeng, Cheng Jing, Wang Yanna, Chen Chuncui, Huang Lei, Zhang Caiyun, Zhou Yifei, Dong Shanshan, Lu Guilin, Qin Wenjuan
Published in:
Acta Biochimica et Biophysica Sinica, Vol 56, Pp 140-143 (2023)
External link:
https://doaj.org/article/c4db202bb1b242f09210cde80a421e07
Author:
Qin Wenjuan, Shi Wenrong, Tian Ruimeng, Wang Yanna, Feng Jia, Zhou Yifei, Dong Shanshan, Cheng Jing, Zhang Caiyun, Lu Guilin
Published in:
Acta Biochimica et Biophysica Sinica, Vol 55, Pp 882-884 (2023)
External link:
https://doaj.org/article/559ba5cf4da9489397dc44467cddfb88
Building generalist robotic systems involves effectively endowing robots with the capabilities to handle novel objects in an open-world setting. Inspired by the advances of large pre-trained models, we propose Keypoint Affordance Learning from Imagin
External link:
http://arxiv.org/abs/2409.14066
Author:
Zhou, Yifei, Liu, Sitong
Self-supervised learning has been a powerful training paradigm to facilitate representation learning. In this study, we design a masked autoencoder (MAE) to guide deep learning models to learn electroencephalography (EEG) signal representation. Our M
External link:
http://arxiv.org/abs/2408.05375
Training corpora for vision language models (VLMs) typically lack sufficient amounts of decision-centric data. This renders off-the-shelf VLMs sub-optimal for decision-making tasks such as in-the-wild device control through graphical user interfaces
External link:
http://arxiv.org/abs/2406.11896
Author:
Kong, Lingkai, Wang, Haorui, Mu, Wenhao, Du, Yuanqi, Zhuang, Yuchen, Zhou, Yifei, Song, Yue, Zhang, Rongzhi, Wang, Kai, Zhang, Chao
Aligning large language models (LLMs) with human objectives is crucial for real-world applications. However, fine-tuning LLMs for alignment often suffers from unstable training and requires substantial computing resources. Test-time alignment techniq
External link:
http://arxiv.org/abs/2406.05954
Author:
Zhai, Yuexiang, Bai, Hao, Lin, Zipeng, Pan, Jiayi, Tong, Shengbang, Zhou, Yifei, Suhr, Alane, Xie, Saining, LeCun, Yann, Ma, Yi, Levine, Sergey
Large vision-language models (VLMs) fine-tuned on specialized visual instruction-following data have exhibited impressive language reasoning capabilities across various scenarios. However, this fine-tuning paradigm may not be able to efficiently lear
External link:
http://arxiv.org/abs/2405.10292
Published in:
Australasian Orthodontic Journal, Vol 35, Iss 2, Pp 127-133 (2019)
To investigate the effect of combined oral contraceptives (COC) on orthodontic tooth movement (OTM) and periodontal remodelling in a female rat model.
External link:
https://doaj.org/article/35d47e1c9044434096dbdce6c41d5858
We show that domain-general automatic evaluators can significantly improve the performance of agents for web navigation and device control. We experiment with multiple evaluation models that trade off between inference cost, modularity of design, and
External link:
http://arxiv.org/abs/2404.06474
A broad use case of large language models (LLMs) is in goal-directed decision-making tasks (or "agent" tasks), where an LLM needs to not just generate completions for a given prompt, but rather make intelligent decisions over a multi-turn interaction
External link:
http://arxiv.org/abs/2402.19446