Showing 1 - 10 of 351 results for the search: '"Zhou Yukun"'
Published in:
Open Geosciences, Vol 15, Iss 1, Pp 48-53 (2023)
The risk factor of the receiving water body is one of the important factors that affect its self-purification ability. Analyzing the concentration and removal rate of target substances such as suspended solids (SS), chemical oxygen demand …
External link:
https://doaj.org/article/b4348dec1269408fbff6aed9fdabc064
Author:
Zheng, Zangwei, Peng, Xiangyu, Yang, Tianji, Shen, Chenhui, Li, Shenggui, Liu, Hongxin, Zhou, Yukun, Li, Tianyi, You, Yang
Vision and language are the two foundational senses for humans, and they build up our cognitive ability and intelligence. While significant breakthroughs have been made in AI language ability, artificial visual intelligence, especially the ability to …
External link:
http://arxiv.org/abs/2412.20404
Author:
Xu, Moucheng, Zhou, Yukun, Goodwin-Allcock, Tobias, Firoozabadi, Kimia, Jacob, Joseph, Alexander, Daniel C., Slator, Paddy J.
We introduce and demonstrate a new paradigm for quantitative parameter mapping in MRI. Parameter mapping techniques, such as diffusion MRI and quantitative MRI, have the potential to robustly and repeatably measure biologically relevant tissue maps …
External link:
http://arxiv.org/abs/2411.10772
Author:
Zhao, Xinyu, Sun, Guoheng, Cai, Ruisi, Zhou, Yukun, Li, Pingzhi, Wang, Peihao, Tan, Bowen, He, Yexiao, Chen, Li, Liang, Yi, Chen, Beidi, Yuan, Binhang, Wang, Hongyi, Li, Ang, Wang, Zhangyang, Chen, Tianlong
As Large Language Models (LLMs) excel across tasks and specialized domains, scaling LLMs by building on existing models has garnered significant attention, but this faces the challenge of degraded performance when disparate models are combined. Various techniques …
External link:
http://arxiv.org/abs/2410.05357
Long-tailed learning is considered an extremely challenging problem in learning from imbalanced data. It aims to train well-generalized models from a large number of images that follow a long-tailed class distribution. In the medical field, many diagnostic …
External link:
http://arxiv.org/abs/2410.02010
Author:
Zoellin, Jay, Merk, Colin, Buob, Mischa, Saad, Amr, Giesser, Samuel, Spitznagel, Tahm, Turgut, Ferhat, Santos, Rui, Zhou, Yukun, Wagner, Sigfried, Keane, Pearse A., Tham, Yih Chung, DeBuc, Delia Cabrera, Becker, Matthias D., Somfai, Gabor M.
Integrating deep learning into medical imaging is poised to greatly advance diagnostic methods, but it faces challenges with generalizability. Foundation models, based on self-supervised learning, address these issues and improve data efficiency. …
External link:
http://arxiv.org/abs/2409.17332
Generative models have achieved remarkable success in the image, video, and text domains. Inspired by this, researchers have explored using generative models to generate neural network parameters. However, these efforts have been limited by the parameter …
External link:
http://arxiv.org/abs/2408.01415
Author:
Wang, Meng, Lin, Tian, Lin, Aidi, Yu, Kai, Peng, Yuanyuan, Wang, Lianyu, Chen, Cheng, Zou, Ke, Liang, Huiyu, Chen, Man, Yao, Xue, Zhang, Meiqin, Huang, Binwei, Zheng, Chaoxin, Zhang, Peixin, Chen, Wei, Luo, Yilong, Chen, Yifan, Xia, Honghe, Shi, Tingkun, Zhang, Qi, Guo, Jinming, Chen, Xiaolin, Wang, Jingcheng, Tham, Yih Chung, Liu, Dianbo, Wong, Wendy, Thakur, Sahil, Fenner, Beau, Fang, Danqi, Liu, Siying, Liu, Qingyun, Huang, Yuqiang, Zeng, Hongqiang, Meng, Yanda, Zhou, Yukun, Jiang, Zehua, Qiu, Minghui, Zhang, Changqing, Chen, Xinjian, Wang, Sophia Y, Lee, Cecilia S, Sobrin, Lucia, Cheung, Carol Y, Pang, Chi Pui, Keane, Pearse A, Cheng, Ching-Yu, Chen, Haoyu, Fu, Huazhu
Previous foundation models for retinal images were pre-trained with limited disease categories and a limited knowledge base. Here we introduce RetiZero, a vision-language foundation model that leverages knowledge from over 400 fundus diseases. …
External link:
http://arxiv.org/abs/2406.09317
Author:
Qin, Ziheng, Xu, Zhaopan, Zhou, Yukun, Zheng, Zangwei, Cheng, Zebang, Tang, Hao, Shang, Lei, Sun, Baigui, Peng, Xiaojiang, Timofte, Radu, Yao, Hongxun, Wang, Kai, You, Yang
Deep learning benefits from the growing abundance of available data, but efficiently handling this growing data scale has become a challenge. Publicly available data come from different sources with varying quality, and it is impractical to …
External link:
http://arxiv.org/abs/2405.18347
Author:
Wang, Kai, Shi, Mingjia, Zhou, Yukun, Li, Zekai, Yuan, Zhihang, Shang, Yuzhang, Peng, Xiaojiang, Zhang, Hanwang, You, Yang
Training diffusion models is a computation-intensive task. In this paper, we introduce a novel speed-up method for diffusion model training based on a closer look at time steps. Our key findings are: i) time steps can be empirically …
External link:
http://arxiv.org/abs/2405.17403