Showing 1 - 10 of 576 for search: '"Hu, Ju"'
Author:
Cao, Junli, Goel, Vidit, Wang, Chaoyang, Kag, Anil, Hu, Ju, Korolev, Sergei, Jiang, Chenfanfu, Tulyakov, Sergey, Ren, Jian
Recent approaches representing 3D objects and scenes using Gaussian splats show increased rendering speed across a variety of platforms and devices. While rendering such representations is indeed extremely efficient, storing and transmitting them is…
External link:
http://arxiv.org/abs/2406.19434
Author:
Sui, Yang, Li, Yanyu, Kag, Anil, Idelbayev, Yerlan, Cao, Junli, Hu, Ju, Sagar, Dhritiman, Yuan, Bo, Tulyakov, Sergey, Ren, Jian
Diffusion-based image generation models have achieved great success in recent years by showing the capability of synthesizing high-quality content. However, these models contain a huge number of parameters, resulting in a significantly large model si…
External link:
http://arxiv.org/abs/2406.04333
Author:
Li, Yanyu, Liu, Xian, Kag, Anil, Hu, Ju, Idelbayev, Yerlan, Sagar, Dhritiman, Wang, Yanzhi, Tulyakov, Sergey, Ren, Jian
Diffusion-based text-to-image generative models, e.g., Stable Diffusion, have revolutionized the field of content generation, enabling significant advancements in areas like image editing and video synthesis. Despite their formidable capabilities, th…
External link:
http://arxiv.org/abs/2403.18978
Author:
Gupta, Aarush, Cao, Junli, Wang, Chaoyang, Hu, Ju, Tulyakov, Sergey, Ren, Jian, Jeni, László A
Published in:
NeurIPS 2023
Real-time novel-view image synthesis on mobile devices is prohibitive due to the limited computational power and storage. Using volumetric rendering methods, such as NeRF and its derivatives, on mobile devices is not suitable due to the high computat…
External link:
http://arxiv.org/abs/2310.16832
Author:
Li, Yanyu, Wang, Huan, Jin, Qing, Hu, Ju, Chemerys, Pavlo, Fu, Yun, Wang, Yanzhi, Tulyakov, Sergey, Ren, Jian
Text-to-image diffusion models can create stunning images from natural language descriptions that rival the work of professional artists and photographers. However, these models are large, with complex network architectures and tens of denoising iter…
External link:
http://arxiv.org/abs/2306.00980
Author:
Cao, Junli, Wang, Huan, Chemerys, Pavlo, Shakhrai, Vladislav, Hu, Ju, Fu, Yun, Makoviichuk, Denys, Tulyakov, Sergey, Ren, Jian
Recent efforts in Neural Radiance Fields (NeRF) have shown impressive results on novel view synthesis by utilizing implicit neural representation to represent 3D scenes. Due to the process of volumetric rendering, the inference speed for NeRF is ext…
External link:
http://arxiv.org/abs/2212.08057
Author:
Li, Yanyu, Hu, Ju, Wen, Yang, Evangelidis, Georgios, Salahi, Kamyar, Wang, Yanzhi, Tulyakov, Sergey, Ren, Jian
With the success of Vision Transformers (ViTs) in computer vision tasks, recent arts try to optimize the performance and complexity of ViTs to enable efficient deployment on mobile devices. Multiple approaches are proposed to accelerate attention mec…
External link:
http://arxiv.org/abs/2212.08059
Author:
Hu, Ju-Chuan, Tzeng, Hong-Tai, Lee, Wei-Chia, Li, Jian-Ri, Chuang, Yao-Chi
Published in:
International Journal of Molecular Sciences, Aug 2024, Vol. 25, Issue 15, p. 8015, 18 pp.
Author:
Li, Yanyu, Yuan, Geng, Wen, Yang, Hu, Ju, Evangelidis, Georgios, Tulyakov, Sergey, Wang, Yanzhi, Ren, Jian
Vision Transformers (ViT) have shown rapid progress in computer vision tasks, achieving promising results on various benchmarks. However, due to the massive number of parameters and model design, e.g., attention mechanism, ViT-based models a…
External link:
http://arxiv.org/abs/2206.01191
Author:
Huang, Jingjun, Liao, Shujia, Su, Yuqing, Li, Meihui, Hu, Ju, Han, Linqiang, Jiang, Yanlin, Yang, Mu zhi, Zhang, Yan, Li, Shuisheng, Zhang, Yong
Published in:
Aquaculture Reports, August 2024, Vol. 37