Abstract: To address the problems of target initialization and tracking failure in images, a visual SLAM algorithm based on point-line-plane feature fusion is proposed to improve the accuracy and robustness of autonomous localization and map creation for mobile robots. First, suitable algorithms are selected to extract point features, line features, and planar features, respectively. Second, a structural constraint model for feature fusion is constructed to build a point-line-plane fusion visual odometry and a loop-closure detection module. Finally, the structural constraint model fuses point, line, and planar features along with inter-frame data to estimate the camera poses, construct a globally consistent map, and perform back-end nonlinear optimization. The accuracy and effectiveness of the proposed method are verified on the TUM dataset against the ORB-SLAM and LSD-SLAM methods. The experimental results show that the proposed plp SLAM method reduces the mean root-mean-square error of the absolute trajectory by about 0.6 and 20, respectively, compared with ORB-SLAM and LSD-SLAM, and is able to recover the motion trajectory in an unknown environment, which verifies that the proposed plp SLAM method is feasible and effective.
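The comparison above is based on the root-mean-square error of the absolute trajectory (ATE RMSE), the standard accuracy metric on the TUM dataset. As a minimal illustrative sketch (not the paper's evaluation code), assuming the estimated and ground-truth trajectories are already time-associated and aligned to a common frame, the metric can be computed as follows:

```python
import numpy as np

def ate_rmse(estimated, ground_truth):
    """ATE RMSE between two (N, 3) arrays of corresponding camera positions.

    Assumes the trajectories are already time-associated and aligned;
    a full evaluation would first align them (e.g. with a rigid-body fit).
    """
    estimated = np.asarray(estimated, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    # Per-pose translational error, then root mean square over the trajectory.
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

# Toy example: straight-line ground truth and a slightly perturbed estimate.
gt = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
est = gt + np.array([[0.0, 0.1, 0.0], [0.0, -0.1, 0.0], [0.0, 0.1, 0.0]])
print(round(ate_rmse(est, gt), 3))  # 0.1
```

Lower ATE RMSE indicates that the estimated trajectory stays closer to the ground truth over the whole sequence.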
Visual SLAM method for point, line and surface feature fusion
Published:
07 November 2023
by MDPI
in The 4th International Electronic Conference on Applied Sciences
session Computing and Artificial Intelligence
https://doi.org/10.3390/ASEC2023-15868
Keywords: Feature points; Feature fusion; Positioning and mapping; Feature matching