Depth predictable VSLAM for a small-scale robotic rat in dynamic environments
Yulai Zhang
and 5 more
July 16, 2024
The ability to perceive environments provides an important foundation for our self-developed robotic rat to improve its kinematic performance and application potential. However, most existing visual perception methods for quadruped robots suffer from poor accuracy in real-world dynamic environments. To mitigate erroneous data association, the main cause of this low accuracy, this work presents an approach that combines leg odometry (LO) and IMU measurements with VSLAM to provide robust localization for small-scale quadruped robots in challenging scenarios, estimating the depth map and removing moving objects in dynamic environments. The method includes a depth estimation network with improved accuracy, obtained by combining the attention mechanism of the Transformer with the RAFT-Stereo depth estimation algorithm. In addition, the method combines target identification and segmentation with 3D projection of feature points to remove moving objects in dynamic environments. Furthermore, LO and IMU data are fused within a modified ORB-SLAM3 framework to achieve highly accurate localization. The proposed approach is robust against erroneous data association caused by moving objects and the wobble of quadruped robots. Evaluation results on multiple stages demonstrate that the system performs competitively in dynamic environments, outperforming existing visual perception methods both on public benchmarks and on our custom small-scale robotic rat.
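To make the dynamic-feature-removal step concrete, the following is a minimal sketch (not the authors' implementation) of one common way such filtering is done: feature points whose pixel coordinates fall inside a segmentation mask of moving objects are discarded before tracking. The function name `filter_dynamic_features` and the toy data are illustrative assumptions.

```python
import numpy as np

def filter_dynamic_features(keypoints, dynamic_mask):
    """Keep only keypoints lying outside the dynamic-object mask.

    keypoints    : (N, 2) array of (x, y) pixel coordinates
    dynamic_mask : (H, W) boolean array, True where a moving object
                   (e.g. a person) was segmented
    """
    xs = keypoints[:, 0].astype(int)
    ys = keypoints[:, 1].astype(int)
    h, w = dynamic_mask.shape
    # Keypoints outside the image bounds are treated as unreliable and dropped.
    inside = (xs >= 0) & (xs < w) & (ys >= 0) & (ys < h)
    static = np.zeros(len(keypoints), dtype=bool)
    # A keypoint is static if its pixel is not covered by the dynamic mask.
    static[inside] = ~dynamic_mask[ys[inside], xs[inside]]
    return keypoints[static]

# Toy example: 10x10 image with a "moving object" in the top-left quadrant.
mask = np.zeros((10, 10), dtype=bool)
mask[:5, :5] = True
pts = np.array([[1.0, 1.0], [7.0, 2.0], [3.0, 8.0], [9.0, 9.0]])
print(filter_dynamic_features(pts, mask))  # the point (1, 1) is removed
```

In a full pipeline such as the one described above, the mask would come from the segmentation network and the surviving points would feed the SLAM front end; the 3D projection check the abstract mentions would further reject points whose reprojected depth is inconsistent with the static scene.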