Showing 1 - 7 of 7
for search: '"Xu, Yinshuang"'
Author:
Xu, Yinshuang, Chen, Dian, Liu, Katherine, Zakharov, Sergey, Ambrus, Rares, Daniilidis, Kostas, Guizilini, Vitor
Incorporating inductive bias by embedding geometric entities (such as rays) as input has proven successful in multi-view learning. However, the methods adopting this technique typically lack equivariance, which is crucial for effective 3D learning. …
External link:
http://arxiv.org/abs/2411.07326
Author:
Jayanth, Royina Karegoudra, Xu, Yinshuang, Wang, Ziyun, Chatzipantazis, Evangelos, Gehrig, Daniel, Daniilidis, Kostas
Neural networks are seeing rapid adoption in purely inertial odometry, where accelerometer and gyroscope measurements from commodity inertial measurement units (IMU) are used to regress displacements and associated uncertainties. …
External link:
http://arxiv.org/abs/2408.06321
3D reconstruction and novel view rendering can greatly benefit from geometric priors when the input views are not sufficient in terms of coverage and inter-view baselines. …
External link:
http://arxiv.org/abs/2212.14871
We introduce a unified framework for group equivariant networks on homogeneous spaces derived from a Fourier perspective. We consider tensor-valued feature fields, before and after a convolutional layer. We present a unified derivation of kernels via
External link:
http://arxiv.org/abs/2206.08362
Author:
Zhang, Lingzhi, Wang, Jiancong, Xu, Yinshuang, Min, Jie, Wen, Tarmily, Gee, James C., Shi, Jianbo
We propose an image synthesis approach that provides stratified navigation in the latent code space. With a tiny amount of partial or very low-resolution image input, our approach can consistently outperform state-of-the-art counterparts. …
External link:
http://arxiv.org/abs/2006.02038
Several popular approaches to 3D vision tasks process multiple views of the input independently with deep neural networks pre-trained on natural images, achieving view permutation invariance through a single round of pooling over all views. …
External link:
http://arxiv.org/abs/1904.00993
Academic article
This result cannot be displayed to unauthenticated users; logging in is required to view it.