Showing 1 - 5 of 5 for search: '"Deng, Juncan"'
Diffusion Transformer models (DiTs) have shifted the network architecture from traditional UNets to transformers, demonstrating exceptional capabilities in image generation. Although DiTs have been widely applied to high-definition video generation…
External link:
http://arxiv.org/abs/2408.17131
Author:
Wang, Zeyu, Lin, Jingyu, Qian, Yifei, Huang, Yi, Tian, Shicen, Chai, Bosong, Deng, Juncan, Yang, Qu, Du, Lan, Chen, Cunjian, Guo, Yufei, Huang, Kejie
Diffusion models have made significant strides in language-driven and layout-driven image generation. However, most diffusion models are limited to generating visible RGB images. In fact, human perception of the world is enriched by diverse viewpoints…
External link:
http://arxiv.org/abs/2407.15488
In this paper, we propose a robust edge-direct visual odometry (VO) method based on CNN edge detection and Shi-Tomasi corner optimization. In the proposed method, a four-level image pyramid is extracted to reduce the motion error between frames…
External link:
http://arxiv.org/abs/2110.11064
Academic article
This result cannot be displayed to unauthenticated users; you must log in to view it.
Published in:
2021 29th European Signal Processing Conference (EUSIPCO)