Showing 1 - 10 of 46 for the search: '"Gehrig, Mathias"'
Published in:
European Conference on Computer Vision (ECCV 2024)
Visual Odometry (VO) is essential to downstream mobile robotics and augmented/virtual reality tasks. Despite recent advances, existing VO methods still rely on heuristic design choices that require several weeks of hyperparameter tuning by human experts…
External link:
http://arxiv.org/abs/2407.15626
Published in:
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, 2024
Today, state-of-the-art deep neural networks that process event-camera data first convert a temporal window of events into dense, grid-like input representations. As such, they exhibit poor generalizability when deployed at higher inference frequencies…
External link:
http://arxiv.org/abs/2402.15584
Object detection with event cameras benefits from the sensor's low latency and high dynamic range. However, it is costly to fully label event streams for supervised training due to their high temporal resolution. To reduce this cost, we present LEOD…
External link:
http://arxiv.org/abs/2311.17286
Published in:
IEEE Winter Conference on Applications of Computer Vision (WACV 2024)
Vision Transformers (ViTs) have shown impressive performance in computer vision, but their high computational cost, quadratic in the number of tokens, limits their adoption in computation-constrained applications. However, this large number of tokens…
External link:
http://arxiv.org/abs/2306.07050
Today, state-of-the-art deep neural networks that process events first convert them into dense, grid-like input representations before using an off-the-shelf network. However, selecting the appropriate representation for the task traditionally requires…
External link:
http://arxiv.org/abs/2304.13455
Author:
Schnider, Yannick, Wozniak, Stanislaw, Gehrig, Mathias, Lecomte, Jules, von Arnim, Axel, Benini, Luca, Scaramuzza, Davide, Pantazi, Angeliki
Optical flow provides information on relative motion that is an important component in many computer vision pipelines. Neural networks provide high-accuracy optical flow, yet their complexity is often prohibitive for application at the edge or in robotics…
External link:
http://arxiv.org/abs/2304.07139
Published in:
IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, 2024
Spiking Neural Networks (SNNs) are a class of bio-inspired neural networks that promise to bring low-power and low-latency inference to edge devices through asynchronous and sparse processing. However, being temporal models, SNNs depend heavily on exp…
External link:
http://arxiv.org/abs/2303.14176
Author:
Gehrig, Mathias, Scaramuzza, Davide
Published in:
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, 2023
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras. Event cameras provide visual information with sub-millisecond latency at a high dynamic range and with strong robustness against motion blur…
External link:
http://arxiv.org/abs/2212.05598
Published in:
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, 2023
Because of their high temporal resolution, increased resilience to motion blur, and very sparse output, event cameras have been shown to be ideal for low-latency and low-bandwidth feature tracking, even in challenging scenarios. Existing feature trackers…
External link:
http://arxiv.org/abs/2211.12826
Published in:
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2024
We present a method for estimating dense continuous-time optical flow from event data. Traditional dense optical flow methods compute the pixel displacement between two images. Due to missing information, these approaches cannot recover the pixel trajectories…
External link:
http://arxiv.org/abs/2203.13674