Showing 1 - 10 of 349 for search: '"DU Yifan"'
Published in:
Frontiers in Energy Research, Vol 12 (2024)
Introduction: More than two billion tons of oil reserves have been discovered in the fractured-cavity carbonate reservoirs of the Tarim Basin, which is a key field for increasing reserves and production in the future. This type of oil reservoir is
External link:
https://doaj.org/article/facf522da212482481bb77509fbf9415
Author:
TANG Zhuoyao, ZHENG Jinxing, LIU Haiyang, LU Yudong, DU Yifan, KE Maolin, WANG Luoqi, WU Meiqi, WU Tao, SHI Jiaming
Published in:
He jishu, Vol 47, Iss 5, Pp 050009-050009 (2024)
Background: With the development of the human aerospace industry, it is necessary to develop propulsion systems suited to different space mission scenarios. The MagnetoPlasmaDynamic thruster (MPDT), whose principle is similar to that of magnetic confinement
External link:
https://doaj.org/article/2f83f9a5adf84eee8add9b7a5b8b6c16
Author:
WANG Luoqi, ZHENG Jinxing, LIU Haiyang, LI Fei, MENG Dongdong, LU Yudong, DU Yifan, TANG Zhuoyao, WU Tao, SHI Jiaming
Published in:
He jishu, Vol 47, Iss 5, Pp 050010-050010 (2024)
Background: Electric propulsion systems, compared to traditional chemical propulsion, offer longer operational lifespans and lower fuel consumption in space missions, and have garnered significant attention in recent years. However, for high-power Hall effect
External link:
https://doaj.org/article/3e70e9ced42947a6a3a07f971bc62229
Author:
Du, Yifan, Sua, Yong Meng, Kumar, Santosh, Zhang, Jiuyi, Li, Xiangzhi, Hu, Yongxiang, Ghuman, Parminder, Huang, Yuping
We demonstrate a chip-integrated emission spectroscope capable of retrieving the temperature of the light sources. It consists of a single photon detector with low dark counts and a sweeping on-chip filter with 2 pm spectral resolution in the visible
External link:
http://arxiv.org/abs/2410.23966
Author:
Du, Yifan, Huo, Yuqi, Zhou, Kun, Zhao, Zijia, Lu, Haoyu, Huang, Han, Zhao, Wayne Xin, Wang, Bingning, Chen, Weipeng, Wen, Ji-Rong
Video Multimodal Large Language Models (MLLMs) have shown remarkable capability of understanding the video semantics on various downstream tasks. Despite the advancements, there is still a lack of systematic research on visual context representation,
External link:
http://arxiv.org/abs/2410.13694
Author:
Du, Yifan, Zhou, Kun, Huo, Yuqi, Li, Yifan, Zhao, Wayne Xin, Lu, Haoyu, Zhao, Zijia, Wang, Bingning, Chen, Weipeng, Wen, Ji-Rong
With the rapid development of video Multimodal Large Language Models (MLLMs), numerous benchmarks have been proposed to assess their video understanding capability. However, due to the lack of rich events in the videos, these datasets may suffer from
External link:
http://arxiv.org/abs/2406.14129
Author:
Zhao, Zijia, Lu, Haoyu, Huo, Yuqi, Du, Yifan, Yue, Tongtian, Guo, Longteng, Wang, Bingning, Chen, Weipeng, Liu, Jing
Video understanding is a crucial next step for multimodal large language models (MLLMs). Various benchmarks have been introduced to better evaluate MLLMs. Nevertheless, current video benchmarks are still inefficient for evaluating video models durin
External link:
http://arxiv.org/abs/2406.09367
Author:
Du Yifan
Published in:
SHS Web of Conferences, Vol 157, p 03014 (2023)
The "Lugou Bridge Incident" broke out on July 7, 1937. To save the country from subjugation, the Kuomintang and the Communist Party entered their second cooperation. This cooperation was non-party cooperation, and there are great differ
External link:
https://doaj.org/article/040ecb8ebba64c0eb5545088fba883f2
Author:
Du, Yifan, Guo, Hangyu, Zhou, Kun, Zhao, Wayne Xin, Wang, Jinpeng, Wang, Chuyuan, Cai, Mingchen, Song, Ruihua, Wen, Ji-Rong
Visual instruction tuning is an essential approach to improving the zero-shot generalization capability of Multi-modal Large Language Models (MLLMs). A surge of visual instruction datasets with various focuses and characteristics has been proposed r
External link:
http://arxiv.org/abs/2311.01487
In this paper, we propose a novel language model guided captioning approach, LAMOC, for knowledge-based visual question answering (VQA). Our approach employs the generated captions by a captioning model as the context of an answer prediction model, w
External link:
http://arxiv.org/abs/2305.17006