Showing 1 - 10 of 19 results for search: '"Michal Perdoch"'
Author:
Yaroslava Lochman, Kostiantyn Liepieshov, Jianhui Chen, Michal Perdoch, Christopher Zach, James Pritts
Published in:
2021 IEEE/CVF International Conference on Computer Vision (ICCV).
Existing calibration methods occasionally fail for large field-of-view cameras due to the non-linearity of the underlying problem and the lack of good initial values for all parameters of the used camera model. This might occur because a simpler projection model …
Author:
Tomas Simon, Alexander Trenor Hypes, Yaser Sheikh, Shih-En Wei, Hernan Badino, Adam W. Harley, Stephen Lombardi, Dawei Wang, Jason Saragih, Michal Perdoch
Published in:
ACM Transactions on Graphics. 38:1-16
A key promise of Virtual Reality (VR) is the possibility of remote social interaction that is more immersive than any prior telecommunication media. However, existing social VR experiences are mediated by inauthentic digital representations of the user …
Published in:
ICCV
The recent proliferation of high-resolution cameras presents an opportunity to achieve unprecedented levels of precision in visual 3D reconstruction. Yet the camera calibration pipeline, developed decades ago using checkerboards, has remained the de facto standard …
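For context, a minimal sketch of the decades-old checkerboard pipeline this abstract refers to, using OpenCV's standard pinhole-plus-distortion model; this is the generic baseline, not the paper's method, and the board geometry below is a placeholder assumption.

    # Generic checkerboard calibration sketch (assumed board of 9x6 inner corners,
    # 3 cm squares); not the method proposed in the paper above.
    import cv2
    import numpy as np

    def calibrate_from_checkerboards(image_paths, board=(9, 6), square=0.03):
        # 3D template of the planar target's inner corners, in metres
        objp = np.zeros((board[0] * board[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square
        obj_pts, img_pts, size = [], [], None
        for path in image_paths:
            gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, board)
            if found:
                obj_pts.append(objp)
                img_pts.append(corners)
                size = gray.shape[::-1]
        # intrinsic matrix K, distortion coefficients, and per-view extrinsics
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
        return rms, K, dist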
Published in:
WACV
We propose a new approach for detecting repeated patterns on a grid in a single image. To do so, we detect repetitions in the space of pre-trained deep CNN filter responses at all layer levels. These encode features at several conceptual …
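To illustrate the kind of signal involved (a rough sketch, not the authors' detector), repetition in pre-trained CNN responses can be probed by autocorrelating a feature map; off-origin peaks hint at a repeated grid. The VGG16 backbone and layer index below are assumptions for illustration.

    # Sketch: spatial autocorrelation of one pre-trained CNN layer's responses.
    # Requires torchvision >= 0.13 for the weights= argument.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

    def feature_autocorrelation(image_path, layer=16):
        img = T.Compose([T.Resize(512), T.ToTensor()])(Image.open(image_path).convert("RGB"))
        x = img.unsqueeze(0)
        with torch.no_grad():
            for i, module in enumerate(backbone):
                x = module(x)
                if i == layer:
                    break
        fmap = x.squeeze(0)                                # C x H x W responses
        fmap = fmap - fmap.mean(dim=(1, 2), keepdim=True)  # zero-mean per channel
        spec = torch.fft.rfft2(fmap)
        ac = torch.fft.irfft2(spec * spec.conj(), s=fmap.shape[-2:])
        return ac.sum(dim=0)                               # off-origin peaks suggest repetition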
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::d3e0582c5e6184d3bbe3abb15024aae3
https://hdl.handle.net/20.500.11850/176177
Published in:
International Journal of Computer Vision. 103:163-175
A novel similarity measure for bag-of-words type large-scale image retrieval is presented. The similarity function is learned in an unsupervised manner, requires no extra space over the standard bag-of-words method, and is more discriminative than both …
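For reference, the standard bag-of-words baseline that such a learned similarity improves on is a tf-idf weighted histogram cosine; a minimal sketch follows (the vocabulary and idf weights are placeholders, and this is not the paper's learned measure).

    # Sketch of plain tf-idf bag-of-visual-words cosine similarity.
    import numpy as np

    def bow_similarity(desc_a, desc_b, vocabulary, idf):
        # desc_*: local descriptors (n x d); vocabulary: visual words (k x d); idf: (k,)
        def histogram(desc):
            # assign each descriptor to its nearest visual word
            d2 = ((desc[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
            words = np.argmin(d2, axis=1)
            h = np.bincount(words, minlength=len(vocabulary)).astype(float) * idf
            norm = np.linalg.norm(h)
            return h / norm if norm > 0 else h
        return float(histogram(desc_a) @ histogram(desc_b))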
Published in:
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
This work proposes a progressive patch-based multiview stereo algorithm able to deliver a dense point cloud at any time. This enables immediate feedback on the reconstruction process in a user-centric scenario. With increasing processing time …
Published in:
CVPR
In many retrieval, object recognition, and wide-baseline stereo methods, correspondences of interest points (distinguished regions) are commonly established by matching compact descriptors such as SIFTs. We show that a subsequent cosegmentation process …
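As a reminder of the standard step this snippet refers to, here is a minimal SIFT correspondence sketch with OpenCV and Lowe's ratio test; the paper's cosegmentation-based verification of these tentative matches is not shown.

    # Sketch: tentative correspondences from SIFT descriptors plus a ratio test.
    import cv2

    def sift_correspondences(path_a, path_b, ratio=0.8):
        sift = cv2.SIFT_create()
        img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
        img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
        kp_a, des_a = sift.detectAndCompute(img_a, None)
        kp_b, des_b = sift.detectAndCompute(img_b, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
        good = []
        for pair in matches:
            # keep only distinctive matches (Lowe's ratio test)
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        return [(kp_a[m.queryIdx].pt, kp_b[m.trainIdx].pt) for m in good]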
Author:
Peter Rander, Jonathan K. Chang, Michal Perdoch, Herman Herman, David McAllister Bradley, Anthony Stentz
Published in:
IROS
A key challenge of developing robots that work closely with people is creating a user interface that allows a user to communicate complex instructions to a robot quickly and easily. We consider a walking logistics support robot, which is designed to …
Published in:
BMVC
We have presented a new problem -- the wide multiple baseline stereo (WxBS) -- which considers matching of images that simultaneously differ in more than one image acquisition factor, such as viewpoint, illumination, sensor type, or where object appearance …
Published in:
Pattern Recognition
Pattern Recognition, Elsevier, 2011, 44 (7), pp.1514-1527. ⟨10.1016/j.patcog.2011.01.005⟩
We propose an approach to curvilinear and wiry object detection and matching based on a new curvilinear region detector (CRD) and a shape-context-like descriptor (COH). Standard methods for local patch detection and description are not directly applicable …
External link:
https://explore.openaire.eu/search/publication?articleId=doi_dedup___::0102f9f90a6452c0e1b82f5e2604fffa
https://hal.archives-ouvertes.fr/hal-00643610