• Acta Photonica Sinica
  • Vol. 53, Issue 4, 0415001 (2024)
Yufeng XU, Yuanzhi LIU, Minghui QIN, Hui ZHAO, and Wei TAO*
Author Affiliations
  • School of Sensing Science and Engineering, School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
    DOI: 10.3788/gzxb20245304.0415001
    Yufeng XU, Yuanzhi LIU, Minghui QIN, Hui ZHAO, Wei TAO. Global Low Bias Visual/inertial/weak-positional-aided Fusion Navigation System[J]. Acta Photonica Sinica, 2024, 53(4): 0415001

    Abstract

    In recent years, the rapid development of mobile robots, autonomous driving, drones, and related technologies has increased the demand for high-precision navigation in complex environments. Visual-inertial odometry (VIO) is widely used in robot navigation because of its low cost and high practicality. However, owing to its relative measurement principle, cumulative error grows significantly during long-term operation. To address this problem, a global low-bias visual/inertial/weak-positional-aided fusion navigation system is proposed. The system offers optional modules that integrate several sources of unbiased positioning information, such as raw Global Navigation Satellite System (GNSS) measurements, ultrasonic base-station ranging, and visual-target positioning aids, thereby fully combining the advantages of global information and visual-inertial odometry. As a result, low-bias global navigation with high precision, high continuity, high real-time performance, and seamless indoor/outdoor integration is achieved. The main framework of the system is a factor graph model built on visual-inertial odometry, which ensures high-frequency pose output and seamless indoor/outdoor switching. The visual-inertial residual factor is defined from the visual reprojection model and the IMU pre-integration model. For different application scenarios, GNSS and ultrasonic constraints are introduced as optional factors and state quantities, with corresponding GNSS and ultrasonic residuals. The GNSS factor constructs its residuals from pseudorange measurements and Doppler-shift information; the ultrasonic factor constructs its residuals from ultrasonic positioning results and ultrasonic base-station distance measurements. In addition, an optional ArUco visual-correction module is provided. 
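The GNSS and ultrasonic residuals described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names are hypothetical, and the pseudorange model is simplified by ignoring atmospheric delays and satellite clock error.

```python
import math

def pseudorange_residual(rx_pos, clock_bias, sat_pos, measured_pr):
    # Predicted pseudorange = geometric range + receiver clock bias (m).
    # Simplified: atmospheric delays and satellite clock error ignored.
    predicted = math.dist(rx_pos, sat_pos) + clock_bias
    return measured_pr - predicted

def doppler_residual(rx_pos, rx_vel, clock_drift, sat_pos, sat_vel, measured_rate):
    # Range rate = relative velocity projected onto the line of sight,
    # plus receiver clock drift (m/s); the Doppler shift scales this by
    # the carrier wavelength, which is factored out here.
    rng = math.dist(rx_pos, sat_pos)
    los = [(s - r) / rng for s, r in zip(sat_pos, rx_pos)]
    rel_vel = [sv - rv for sv, rv in zip(sat_vel, rx_vel)]
    predicted = sum(v * u for v, u in zip(rel_vel, los)) + clock_drift
    return measured_rate - predicted

def ultrasonic_range_residual(pos, beacon_pos, measured_dist):
    # Residual between a measured base-station distance and the
    # distance predicted from the current pose estimate.
    return measured_dist - math.dist(pos, beacon_pos)
```

In a factor-graph back end, each such residual would be weighted by its measurement covariance and minimized jointly with the visual-inertial factors.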
Based on prior ArUco marker positions and an ArUco target recognition algorithm, an ArUco-assisted global pose optimization method is defined. A wheeled robot platform equipped with multiple sensors, including cameras and LiDAR, was built to collect data and test the algorithms in an underground parking lot and the connected above-ground building complex. The experimental scene was scanned with a laser scanner to generate a ground-truth map, and VIO-assisted LiDAR point clouds were registered against this map prior to obtain an accurate ground-truth trajectory. The positioning and navigation performance of three methods, the proposed method, VINS-Mono, and ORB-SLAM3, was tested in three scenarios: indoor, indoor-outdoor during the day, and indoor-outdoor at night. The results show that in all three scenarios the Relative Pose Error (RPE) and Absolute Trajectory Error (ATE) of the proposed method are superior to those of the other methods. In particular, under the harsh night-time indoor-outdoor conditions, the ATE RMSE of the proposed method is 3.495 m, significantly better than VINS-Mono (10.77 m) and ORB-SLAM3 (15.02 m). The experiments also compare the proposed method using the VIO+ArUco module against VINS-Mono with loop closure detection enabled, showing that the ArUco module is of great significance for eliminating global cumulative error and improving global navigation accuracy, and can mitigate loop-closure failures to a certain extent. Overall, this paper presents an extensible, multi-modal, weakly-aided visual-inertial navigation system, and the experimental results demonstrate excellent global positioning accuracy and generality across scenes. 
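The ATE RMSE metric used in the evaluation can be sketched as follows. This is a generic illustration rather than the paper's evaluation code; it assumes the two trajectories are already time-associated and expressed in a common frame (the usual Umeyama alignment step is omitted).

```python
import math

def ate_rmse(estimated, ground_truth):
    # Root-mean-square of per-pose position errors between two
    # time-associated trajectories, given as equal-length lists of
    # 3-D points. Alignment to a common frame is assumed done.
    assert len(estimated) == len(ground_truth) and estimated
    sq_errs = [math.dist(e, g) ** 2 for e, g in zip(estimated, ground_truth)]
    return math.sqrt(sum(sq_errs) / len(sq_errs))
```

RPE is computed analogously but on relative pose increments over a fixed interval, which makes it sensitive to local drift rather than accumulated global error.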
In future work, the range of multi-modal information can be further expanded to explore fusion schemes for additional sensors such as LiDAR and magnetometers.