Optimized Navigation of Mobile Robots Based on Faster R-CNN in Wireless Sensor Network

Author: Sevugan, Alagumuthukrishnan; Karthikeyan, Periyasami; Sarveshwaran, Velliangiri; Manoharan, Rajesh
Source: International Journal of Sensors, Wireless Communications and Control; 2022, Vol. 12, Issue 6, p. 440-448, 9p
Abstract:

Background: In recent years, deep learning techniques have dramatically enhanced mobile robot sensing, navigation, and reasoning. Owing to advances in machine vision technology and algorithms, visual sensors have become increasingly important in mobile robot applications. However, because current neural network topologies have low computational efficiency and adapt poorly to the requirements of robotic experimentation, gaps remain in deploying these techniques on real robots. It is worth noting that AI techniques are used to address several difficulties in mobile robotics, using vision as the sole source of information or combined with additional sensors such as lasers or GPS. Many methods have been proposed over the last few years, resulting in a wide range of approaches that build a reliable model of the environment, estimate the robot's position within it, and manage the robot's motion from one location to another.

Objective: The proposed method aims to detect objects in smart homes and offices using an optimized Faster R-CNN and to improve accuracy across different datasets.

Methods: The proposed methodology uses a novel clustering technique based on Faster R-CNN networks, a new and effective way of detecting groups of measurements with continuous similarity. The resulting communities are coupled with the metric information given by the robot's distance estimation through an agglomerative hierarchical clustering algorithm. The method also optimizes the region-of-interest (RoI) layers to generate optimized features.

Results: The proposed approach is tested on indoor and outdoor datasets, producing topological maps that aid semantic localization. The system successfully categorizes places when the robot returns to the same area, despite potential lighting variations, and achieves better accuracy than the VGG-19 and R-CNN baselines.

Conclusion: The findings were positive, indicating that accurate categorization can be achieved even under varying illumination by adequately designing an area's semantic map. The Faster R-CNN model shows the lowest error rate among the three evaluated models.
Database: Supplemental Index
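
The Methods section above describes grouping visual measurements with a Faster R-CNN network and an agglomerative hierarchical clustering step. The sketch below illustrates that general idea only: it pools features from a pre-trained Faster R-CNN backbone into per-image place descriptors and clusters them hierarchically. The model choice (torchvision's fasterrcnn_resnet50_fpn), the mean-pooled FPN descriptor, and the distance threshold are illustrative assumptions, not the paper's configuration, and the coupling with the robot's metric distance estimates described in the abstract is omitted here.

```python
# Minimal sketch: Faster R-CNN backbone features + agglomerative clustering.
# Assumptions: torchvision Faster R-CNN as feature extractor, mean-pooled FPN
# descriptors, Euclidean average-linkage clustering with a hand-picked threshold.
import torch
import torchvision
from sklearn.cluster import AgglomerativeClustering

# Pre-trained detector; its backbone is reused as a visual feature extractor.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()


@torch.no_grad()
def place_descriptor(image: torch.Tensor) -> torch.Tensor:
    """Mean-pool the FPN feature maps of one (C, H, W) image in [0, 1] into a vector."""
    transformed, _ = model.transform([image], None)
    features = model.backbone(transformed.tensors)            # dict of FPN levels
    pooled = [level.mean(dim=(2, 3)).squeeze(0) for level in features.values()]
    return torch.cat(pooled)                                   # one descriptor per image


def cluster_places(images, distance_threshold: float = 5.0):
    """Group images of similar places with agglomerative hierarchical clustering."""
    descriptors = torch.stack([place_descriptor(img) for img in images]).numpy()
    clustering = AgglomerativeClustering(
        n_clusters=None,
        distance_threshold=distance_threshold,  # tuned per dataset (assumption)
        linkage="average",
    )
    return clustering.fit_predict(descriptors)  # one cluster label per image


if __name__ == "__main__":
    # Usage example with random frames standing in for the robot's camera images.
    frames = [torch.rand(3, 240, 320) for _ in range(6)]
    print(cluster_places(frames))
```

In this sketch, each cluster label can be treated as a node of a topological map; images assigned to the same label are assumed to come from the same semantic place, which is the role the clustering step plays in the abstract.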