Showing 1 - 10 of 122 for search: '"Lin Yonghua"'
Author:
Wang, Xinlong, Zhang, Xiaosong, Luo, Zhengxiong, Sun, Quan, Cui, Yufeng, Wang, Jinsheng, Zhang, Fan, Wang, Yueze, Li, Zhen, Yu, Qiying, Zhao, Yingli, Ao, Yulong, Min, Xuebin, Li, Tao, Wu, Boya, Zhao, Bo, Zhang, Bowen, Wang, Liangdong, Liu, Guang, He, Zheqi, Yang, Xi, Liu, Jingjing, Lin, Yonghua, Huang, Tiejun, Wang, Zhongyuan
While next-token prediction is considered a promising path towards artificial general intelligence, it has struggled to excel in multimodal tasks, which are still dominated by diffusion models (e.g., Stable Diffusion) and compositional approaches (e.
External link:
http://arxiv.org/abs/2409.18869
Author:
Zhang, Bo-Wen, Wang, Liangdong, Yuan, Ye, Li, Jijie, Gu, Shuhao, Zhao, Mengdi, Wu, Xinya, Liu, Guang, Wu, Chengwei, Zhao, Hanyu, Du, Li, Ju, Yiming, Ma, Quanyue, Ao, Yulong, Zhao, Yingli, Zhu, Songhe, Cao, Zhou, Liang, Dong, Lin, Yonghua, Zhang, Ming, Wang, Shunfei, Zhou, Yanxin, Ye, Min, Chen, Xuekai, Yu, Xinyang, Huang, Xiangjun, Yang, Jian
In recent years, with the rapid application of large language models across various fields, the scale of these models has gradually increased, and the resources required for their pre-training have grown exponentially. Training an LLM from scratch wi
External link:
http://arxiv.org/abs/2408.06567
Recently, both closed-source LLMs and open-source communities have made significant strides, outperforming humans in various general domains. However, their performance in specific professional fields such as medicine, especially within the open-sour
External link:
http://arxiv.org/abs/2406.12182
Published in:
In Information Sciences July 2024 675
Published in:
In Information Sciences March 2024 662
Texture exists in many products, such as wood, beef, and compressed tea. These abundant and stochastic texture patterns differ significantly between any two products. Unlike traditional digital ID tracking, in this paper we propose
External link:
http://arxiv.org/abs/2104.11548
Author:
Zhang, Xiaofan, Ye, Hanchen, Wang, Junsong, Lin, Yonghua, Xiong, Jinjun, Hwu, Wen-mei, Chen, Deming
Existing FPGA-based DNN accelerators typically fall into two design paradigms. Either they adopt a generic reusable architecture to support different DNN networks but leave some performance and efficiency on the table because of the sacrifice of desi
External link:
http://arxiv.org/abs/2008.12745
Neural network accelerators with low latency and low energy consumption are desirable for edge computing. To create such accelerators, we propose a design flow for accelerating the extremely low bit-width neural network (ELB-NN) in embedded FPGAs wit
External link:
http://arxiv.org/abs/1808.04311
Author:
Yuan, Zeming, Li, Xiaoming, Li, Tao, Zhai, Tingting, Lin, Yonghua, Feng, Dianchen, Zhang, Yanghuan
Published in:
In Journal of Energy Storage February 2022 46
Published in:
IUBMB Life; Aug2024, Vol. 76 Issue 8, p534-547, 14p