Showing 1 - 10 of 889 for search: '"Weerakoon, P."'
Author:
Elnoor, Mohamed, Weerakoon, Kasun, Seneviratne, Gershom, Xian, Ruiqi, Guan, Tianrui, Jaffar, Mohamed Khalid M, Rajagopal, Vignesh, Manocha, Dinesh
We present a novel autonomous robot navigation algorithm for outdoor environments that is capable of handling diverse terrain traversability conditions. Our approach, VLM-GroNav, uses vision-language models (VLMs) and integrates them with physical gr…
External link:
http://arxiv.org/abs/2409.20445
Author:
Seneviratne, Gershom, Weerakoon, Kasun, Elnoor, Mohamed, Rajagopal, Vignesh, Varatharajan, Harshavarthan, Jaffar, Mohamed Khalid M, Pusey, Jason, Manocha, Dinesh
We present CROSS-GAiT, a novel algorithm for quadruped robots that uses Cross Attention to fuse terrain representations derived from visual and time-series inputs, including linear accelerations, angular velocities, and joint efforts. These fused rep…
External link:
http://arxiv.org/abs/2409.17262
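The Cross Attention fusion mentioned in the snippet above can be sketched with plain NumPy. This is a minimal illustration of scaled dot-product cross attention between two modalities, not CROSS-GAiT's actual architecture; the feature shapes are invented for the example:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query, keys, values):
    """Scaled dot-product cross attention: queries from one modality
    (e.g. visual features) attend over keys/values from another
    (e.g. time-series features)."""
    d = query.shape[-1]
    scores = query @ keys.T / np.sqrt(d)  # (n_q, n_k) similarity
    weights = softmax(scores, axis=-1)    # rows sum to 1
    return weights @ values               # (n_q, d_v) fused features

rng = np.random.default_rng(0)
visual = rng.standard_normal((4, 16))       # 4 visual patch features (assumed)
timeseries = rng.standard_normal((10, 16))  # 10 time-series features (assumed)
fused = cross_attention(visual, timeseries, timeseries)
print(fused.shape)  # (4, 16)
```

In a learned model the queries, keys, and values would pass through trained projection matrices; they are omitted here to keep the attention mechanism itself visible.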
Author:
Weerakoon, Kasun, Elnoor, Mohamed, Seneviratne, Gershom, Rajagopal, Vignesh, Arul, Senthil Hariharan, Liang, Jing, Jaffar, Mohamed Khalid M, Manocha, Dinesh
We present BehAV, a novel approach for autonomous robot navigation in outdoor scenes guided by human instructions and leveraging Vision Language Models (VLMs). Our method interprets human commands using a Large Language Model (LLM) and categorizes th…
External link:
http://arxiv.org/abs/2409.16484
We present TOPGN, a novel method for real-time transparent obstacle detection for robot navigation in unknown environments. We use a multi-layer 2D grid map representation obtained by summing the intensities of lidar point clouds that lie in multiple…
External link:
http://arxiv.org/abs/2408.05608
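A multi-layer intensity grid of the kind described above can be sketched as follows. The height-band layering, grid size, and cell resolution are assumptions for illustration, not TOPGN's actual parameters:

```python
import numpy as np

def multilayer_intensity_map(points, intensities, z_bands,
                             grid_size=(100, 100), cell=0.1):
    """Sum lidar intensities into one 2D grid per height band.

    points: (N, 3) xyz coordinates in the robot frame
    intensities: (N,) reflected intensity per point
    z_bands: list of (z_min, z_max) height intervals (assumed layering)
    """
    H, W = grid_size
    layers = np.zeros((len(z_bands), H, W))
    # Discretize x/y into grid cells centered on the robot.
    ix = np.clip((points[:, 0] / cell).astype(int) + W // 2, 0, W - 1)
    iy = np.clip((points[:, 1] / cell).astype(int) + H // 2, 0, H - 1)
    for k, (zlo, zhi) in enumerate(z_bands):
        mask = (points[:, 2] >= zlo) & (points[:, 2] < zhi)
        # Unbuffered accumulation handles repeated cell indices correctly.
        np.add.at(layers[k], (iy[mask], ix[mask]), intensities[mask])
    return layers
```

Summing (rather than taking the maximum of) intensities per cell means repeated weak returns, such as those from glass, still accumulate into a detectable signal.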
Author:
Sathyamoorthy, Adarsh Jagan, Weerakoon, Kasun, Elnoor, Mohamed, Zore, Anuj, Ichter, Brian, Xia, Fei, Tan, Jie, Yu, Wenhao, Manocha, Dinesh
We present ConVOI, a novel method for autonomous robot navigation in real-world indoor and outdoor environments using Vision Language Models (VLMs). We employ VLMs in two ways: first, we leverage their zero-shot image classification capability to ide…
External link:
http://arxiv.org/abs/2403.15637
Author:
Elnoor, Mohamed, Weerakoon, Kasun, Sathyamoorthy, Adarsh Jagan, Guan, Tianrui, Rajagopal, Vignesh, Manocha, Dinesh
We present AMCO, a novel navigation method for quadruped robots that adaptively combines vision-based and proprioception-based perception capabilities. Our approach uses three cost maps: general knowledge map; traversability history map; and current…
External link:
http://arxiv.org/abs/2403.13235
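Adaptively combining several per-cell cost maps can be illustrated with a simple weighted sum. The normalization scheme and the `combine_cost_maps` helper are hypothetical, shown only to make the idea concrete, and are not AMCO's actual formulation:

```python
import numpy as np

def combine_cost_maps(maps, weights):
    """Combine same-shaped 2D cost maps with adaptive weights.

    maps: list of (H, W) arrays, e.g. a general knowledge map,
          a traversability history map, and a current-observation map
    weights: per-map reliability scores (normalized to sum to 1)
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the combined cost stays in scale
    return np.tensordot(w, np.stack(maps), axes=1)  # (H, W)
```

In an adaptive scheme the weights would be updated online, e.g. down-weighting the vision-based map in poor lighting and relying more on proprioceptive history.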
We present a novel system, AdVENTR, for autonomous robot navigation in unstructured outdoor environments that consist of uneven and vegetated terrains. Our approach is general and can enable both wheeled and legged robots to handle outdoor terrain com…
External link:
http://arxiv.org/abs/2311.08740
We present VAPOR, a novel method for autonomous legged robot navigation in unstructured, densely vegetated outdoor environments using offline Reinforcement Learning (RL). Our method trains a novel RL policy using an actor-critic network and arbitrary…
External link:
http://arxiv.org/abs/2309.07832
We present Multi-Layer Intensity Map, a novel 3D object representation for robot perception and autonomous navigation. Intensity maps consist of multiple stacked layers of 2D grid maps, each derived from reflected point cloud intensities corresponding…
External link:
http://arxiv.org/abs/2309.07014
ProNav: Proprioceptive Traversability Estimation for Legged Robot Navigation in Outdoor Environments
We propose a novel method, ProNav, which uses proprioceptive signals for traversability estimation in challenging outdoor terrains for autonomous legged robot navigation. Our approach uses sensor data from a legged robot's joint encoders, force, and…
External link:
http://arxiv.org/abs/2307.09754