Showing 1 - 10 of 648 for search: '"Sun, Tianyu"'
This paper studies point cloud perception within outdoor environments. Existing methods face limitations in recognizing objects located at a distance or occluded, due to the sparse nature of outdoor point clouds. In this work, we observe a significant…
External link:
http://arxiv.org/abs/2411.07742
Transparent and reflective objects, which are common in our everyday lives, present a significant challenge to 3D imaging techniques due to their unique visual and optical properties. Faced with these types of objects, RGB-D cameras fail to capture t…
External link:
http://arxiv.org/abs/2410.08567
Author:
Li, Jianhao, Sun, Tianyu, Wang, Zhongdao, Xie, Enze, Feng, Bailan, Zhang, Hongbo, Yuan, Ze, Xu, Ke, Liu, Jiaheng, Luo, Ping
This paper proposes an algorithm for automatically labeling 3D objects from 2D point or box prompts, especially focusing on applications in autonomous driving. Unlike previous arts, our auto-labeler predicts 3D shapes instead of bounding boxes and do…
External link:
http://arxiv.org/abs/2407.11382
The advent of large language models (LLMs) has significantly advanced various fields, including natural language processing and automated dialogue systems. This paper explores the application of LLMs in psychological counseling, addressing the increasing…
External link:
http://arxiv.org/abs/2406.13617
Geometrical mixed finite element methods for fourth order obstacle problems in linearised elasticity
Author:
Piersanti, Paolo, Sun, Tianyu
This paper is devoted to the study of a novel mixed Finite Element Method for approximating the solutions of fourth order variational problems subjected to a constraint. The first problem we consider consists in establishing the convergence of the error…
External link:
http://arxiv.org/abs/2405.20338
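For context, a standard model problem of the class this abstract describes (not necessarily the paper's exact formulation) is the fourth-order obstacle problem: minimize the bending energy over a convex set of functions lying above an obstacle, which is equivalent to a variational inequality. A sketch in LaTeX, with f the load and ψ the obstacle:

\[
  u \in K:\quad \int_{\Omega} \Delta u \,\Delta(v-u)\,\mathrm{d}x \;\ge\; \int_{\Omega} f\,(v-u)\,\mathrm{d}x \quad \text{for all } v \in K,
\]
\[
  K = \{\, v \in H^2_0(\Omega) : v \ge \psi \ \text{a.e. in } \Omega \,\}.
\]

A mixed method typically introduces an auxiliary variable (e.g. σ = Δu) so that the fourth-order problem can be discretized with H¹-conforming elements rather than C¹ elements.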
Author:
DeepSeek-AI, Liu, Aixin, Feng, Bei, Wang, Bin, Wang, Bingxuan, Liu, Bo, Zhao, Chenggang, Dengr, Chengqi, Ruan, Chong, Dai, Damai, Guo, Daya, Yang, Dejian, Chen, Deli, Ji, Dongjie, Li, Erhang, Lin, Fangyun, Luo, Fuli, Hao, Guangbo, Chen, Guanting, Li, Guowei, Zhang, H., Xu, Hanwei, Yang, Hao, Zhang, Haowei, Ding, Honghui, Xin, Huajian, Gao, Huazuo, Li, Hui, Qu, Hui, Cai, J. L., Liang, Jian, Guo, Jianzhong, Ni, Jiaqi, Li, Jiashi, Chen, Jin, Yuan, Jingyang, Qiu, Junjie, Song, Junxiao, Dong, Kai, Gao, Kaige, Guan, Kang, Wang, Lean, Zhang, Lecong, Xu, Lei, Xia, Leyi, Zhao, Liang, Zhang, Liyue, Li, Meng, Wang, Miaojun, Zhang, Mingchuan, Zhang, Minghua, Tang, Minghui, Li, Mingming, Tian, Ning, Huang, Panpan, Wang, Peiyi, Zhang, Peng, Zhu, Qihao, Chen, Qinyu, Du, Qiushi, Chen, R. J., Jin, R. L., Ge, Ruiqi, Pan, Ruizhe, Xu, Runxin, Chen, Ruyi, Li, S. S., Lu, Shanghao, Zhou, Shangyan, Chen, Shanhuang, Wu, Shaoqing, Ye, Shengfeng, Ma, Shirong, Wang, Shiyu, Zhou, Shuang, Yu, Shuiping, Zhou, Shunfeng, Zheng, Size, Wang, T., Pei, Tian, Yuan, Tian, Sun, Tianyu, Xiao, W. L., Zeng, Wangding, An, Wei, Liu, Wen, Liang, Wenfeng, Gao, Wenjun, Zhang, Wentao, Li, X. Q., Jin, Xiangyue, Wang, Xianzu, Bi, Xiao, Liu, Xiaodong, Wang, Xiaohan, Shen, Xiaojin, Chen, Xiaokang, Chen, Xiaosha, Nie, Xiaotao, Sun, Xiaowen, Wang, Xiaoxiang, Liu, Xin, Xie, Xin, Yu, Xingkai, Song, Xinnan, Zhou, Xinyi, Yang, Xinyu, Lu, Xuan, Su, Xuecheng, Wu, Y., Li, Y. K., Wei, Y. X., Zhu, Y. X., Xu, Yanhong, Huang, Yanping, Li, Yao, Zhao, Yao, Sun, Yaofeng, Li, Yaohui, Wang, Yaohui, Zheng, Yi, Zhang, Yichao, Xiong, Yiliang, Zhao, Yilong, He, Ying, Tang, Ying, Piao, Yishi, Dong, Yixin, Tan, Yixuan, Liu, Yiyuan, Wang, Yongji, Guo, Yongqiang, Zhu, Yuchen, Wang, Yuduan, Zou, Yuheng, Zha, Yukun, Ma, Yunxian, Yan, Yuting, You, Yuxiang, Liu, Yuxuan, Ren, Z. Z., Ren, Zehui, Sha, Zhangli, Fu, Zhe, Huang, Zhen, Zhang, Zhen, Xie, Zhenda, Hao, Zhewen, Shao, Zhihong, Wen, Zhiniu, Xu, Zhipeng, Zhang, Zhongyu, Li, Zhuoshu, Wang, Zihan, Gu, Zihui, Li, Zilin, Xie, Ziwei
We present DeepSeek-V2, a strong Mixture-of-Experts (MoE) language model characterized by economical training and efficient inference. It comprises 236B total parameters, of which 21B are activated for each token, and supports a context length of 128K tokens…
External link:
http://arxiv.org/abs/2405.04434
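For intuition about the abstract's headline numbers (21B of 236B parameters active per token), MoE layers route each token to a small top-k subset of expert networks, so only that subset's parameters participate in the forward pass. Below is a minimal, self-contained Python sketch of top-k routing; all names and sizes are hypothetical, and this is not DeepSeek-V2's actual architecture:

import numpy as np

def top_k_gate(logits, k=2):
    # Select the k highest-scoring experts and softmax-normalize their weights.
    top = np.argsort(logits)[-k:]
    w = np.exp(logits[top] - logits[top].max())  # numerically stable softmax
    return top, w / w.sum()

rng = np.random.default_rng(0)
num_experts, d = 8, 16
experts = [rng.normal(size=(d, d)) for _ in range(num_experts)]  # toy expert layers
gate_w = rng.normal(size=(d, num_experts))  # router projection

token = rng.normal(size=d)
idx, weights = top_k_gate(token @ gate_w, k=2)
out = sum(w * (experts[i] @ token) for i, w in zip(idx, weights))

# Only 2 of 8 experts ran for this token (25% of expert parameters),
# the same sparsity principle behind activating 21B of 236B parameters.
print("active experts:", idx, "output norm:", round(float(np.linalg.norm(out)), 3))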
Author:
Xie, Pengwei, Chen, Rui, Chen, Siang, Qin, Yuzhe, Xiang, Fanbo, Sun, Tianyu, Xu, Jing, Wang, Guijin, Su, Hao
Manipulating unseen articulated objects through visual feedback is a critical but challenging task for real robots. Existing learning-based solutions mainly focus on visual affordance learning or other pre-trained visual models to guide manipulation…
External link:
http://arxiv.org/abs/2404.17302
Author:
Jiang, Yangqian, Sun, Tianyu, Jiang, Yue, Wang, Xiaoyan, Xi, Qi, Dou, Yuanyan, Lv, Hong, Peng, Yuting, Xiao, Shuxin, Xu, Xin, Liu, Cong, Xu, Bo, Han, Xiumei, Ma, Hongxia, Hu, Zhibin, Shi, Zhonghua (jesse_1982@163.com), Du, Jiangbo (dujiangbo@njmu.edu.cn), Lin, Yuan (yuanlin@njmu.edu.cn)
Published in:
Environmental Health: A Global Access Science Source. 10/12/2024, Vol. 23 Issue 1, p1-12. 12p.
Author:
Yu, Huiwang, Yu, Weilin, Wang, Bin, Sun, Tianyu, Yang, Yong, Song, Min, Guo, Baisong, Li, Wei, Yu, Zhentao
Published in:
Materials Science & Engineering A, November 2024, Vol. 916.
Published in:
Food Bioscience, October 2024, Vol. 61.