Showing 1 - 10 of 319 for search: '"REN Yiming"'
Published in:
Jixie chuandong, Vol 47, Pp 75-81 (2023)
In this study, a hybrid-type (2-PUR/UPS)&R cervical vertebra rehabilitation machine is proposed for the cervical traction rehabilitation training of patients with cervical spondylosis. The machine has four degrees of freedom (3R1T) and meets the requirements…
External link:
https://doaj.org/article/625048b951a944cd9e589f84bf9b6e6a
Author:
REN Yiming, LI Rui, TANG Si, JI Guanghe, QIN Zelin, ZHAO Leying, TONG Jiaxi, CHEN Yuxuan, GAO Jiaqi, YU Mingkun, RONG Hongguo, XIA Ruyu, FEI Yutong
Published in:
Xiehe Yixue Zazhi, Vol 14, Iss 5, Pp 1076-1083 (2023)
Objective This study aims to analyze the methodological characteristics of qualitative interview studies investigating the public's knowledge and experiences regarding pediatric clinical trials, and to provide a reference for the design and implementation…
External link:
https://doaj.org/article/11f035ebe85d483ab0694069c72e41d1
Author:
Gao, Zhangwei, Chen, Zhe, Cui, Erfei, Ren, Yiming, Wang, Weiyun, Zhu, Jinguo, Tian, Hao, Ye, Shenglong, He, Junjun, Zhu, Xizhou, Lu, Lewei, Lu, Tong, Qiao, Yu, Dai, Jifeng, Wang, Wenhai
Multimodal large language models (MLLMs) have demonstrated impressive performance in vision-language tasks across a broad spectrum of domains. However, the large model scale and associated high computational costs pose significant challenges for training…
External link:
http://arxiv.org/abs/2410.16261
Author:
Zhong, Tianyang, Liu, Zhengliang, Pan, Yi, Zhang, Yutong, Zhou, Yifan, Liang, Shizhe, Wu, Zihao, Lyu, Yanjun, Shu, Peng, Yu, Xiaowei, Cao, Chao, Jiang, Hanqi, Chen, Hanxu, Li, Yiwei, Chen, Junhao, Hu, Huawen, Liu, Yihen, Zhao, Huaqin, Xu, Shaochen, Dai, Haixing, Zhao, Lin, Zhang, Ruidong, Zhao, Wei, Yang, Zhenyuan, Chen, Jingyuan, Wang, Peilong, Ruan, Wei, Wang, Hui, Zhao, Huan, Zhang, Jing, Ren, Yiming, Qin, Shihuan, Chen, Tong, Li, Jiaxi, Zidan, Arif Hassan, Jahin, Afrar, Chen, Minheng, Xia, Sichen, Holmes, Jason, Zhuang, Yan, Wang, Jiaqi, Xu, Bochen, Xia, Weiran, Yu, Jichao, Tang, Kaibo, Yang, Yaxuan, Sun, Bolun, Yang, Tao, Lu, Guoyu, Wang, Xianqiao, Chai, Lilong, Li, He, Lu, Jin, Sun, Lichao, Zhang, Xin, Ge, Bao, Hu, Xintao, Zhang, Lian, Zhou, Hua, Zhang, Lu, Zhang, Shu, Liu, Ninghao, Jiang, Bei, Kong, Linglong, Xiang, Zhen, Ren, Yudan, Liu, Jun, Jiang, Xi, Bao, Yu, Zhang, Wei, Li, Xiang, Li, Gang, Liu, Wei, Shen, Dinggang, Sikora, Andrea, Zhai, Xiaoming, Zhu, Dajiang, Liu, Tianming
This comprehensive study evaluates the performance of OpenAI's o1-preview large language model across a diverse array of complex reasoning tasks, spanning multiple domains, including computer science, mathematics, natural sciences, medicine, linguistics…
External link:
http://arxiv.org/abs/2409.18486
Human motion prediction is crucial for human-centric multimedia understanding and interaction. Current methods typically rely on ground-truth human poses as observed input, which is not practical for real-world scenarios where only raw visual sensor…
External link:
http://arxiv.org/abs/2408.08202
LiDAR-based human motion capture has garnered significant interest in recent years for its practicability in large-scale and unconstrained environments. However, most methods rely on cleanly segmented human point clouds as input; the accuracy and smoothness…
External link:
http://arxiv.org/abs/2407.09833
Author:
Wang, Weiyun, Zhang, Shuibo, Ren, Yiming, Duan, Yuchen, Li, Tiantong, Liu, Shuo, Hu, Mengkang, Chen, Zhe, Zhang, Kaipeng, Lu, Lewei, Zhu, Xizhou, Luo, Ping, Qiao, Yu, Dai, Jifeng, Shao, Wenqi, Wang, Wenhai
With the rapid advancement of multimodal large language models (MLLMs), their evaluation has become increasingly comprehensive. However, understanding long multimodal content, as a foundational ability for real-world applications, remains underexplored…
External link:
http://arxiv.org/abs/2406.07230
Human-centric Point Cloud Video Understanding (PVU) is an emerging field focused on extracting and interpreting human-related features from sequences of human point clouds, further advancing downstream human-centric tasks and applications. Previous works…
External link:
http://arxiv.org/abs/2403.20031
Author:
Cong, Peishan, Wang, Ziyi, Dou, Zhiyang, Ren, Yiming, Yin, Wei, Cheng, Kai, Sun, Yujing, Long, Xiaoxiao, Zhu, Xinge, Ma, Yuexin
Language-guided scene-aware human motion generation has great significance for entertainment and robotics. In response to the limitations of existing datasets, we introduce LaserHuman, a pioneering dataset engineered to revolutionize Scene-Text-to-Motion…
External link:
http://arxiv.org/abs/2403.13307
Author:
Wang, Weiyun, Ren, Yiming, Luo, Haowen, Li, Tiantong, Yan, Chenxiang, Chen, Zhe, Wang, Wenhai, Li, Qingyun, Lu, Lewei, Zhu, Xizhou, Qiao, Yu, Dai, Jifeng
We present the All-Seeing Project V2: a new model and dataset designed for understanding object relations in images. Specifically, we propose the All-Seeing Model V2 (ASMv2), which integrates the formulation of text generation, object localization, and…
External link:
http://arxiv.org/abs/2402.19474