Abstract: |
Achieving accurate, high-rate pose estimates from
proprioceptive and/or exteroceptive measurements is the first step in the development of navigation
algorithms for agile mobile robots such as Unmanned Aerial Vehicles (UAVs). In this paper, we
propose a decoupled multi-sensor fusion approach that allows the combination of generic 6D
visual-inertial (VI) odometry poses and 3D globally referenced positions to infer the global 6D
pose of the robot in real-time. Our approach casts the fusion as a real-time alignment problem
between the local base frame of the VI odometry and the global base frame. The quasi-constant
alignment transformation that relates these coordinate systems is continuously updated employing
graph-based optimization with a sliding window. We evaluate the presented pose estimation method
on both simulated data and large outdoor experiments using a small UAV that is capable of running our
system onboard. Results are compared against different state-of-the-art sensor fusion frameworks,
revealing that the proposed approach is substantially more accurate than other decoupled fusion
strategies. We also demonstrate accuracy comparable to that of a finely tuned Extended Kalman
Filter that fuses visual, inertial, and GPS measurements in a coupled way, and show that our approach
is generic enough to deal with different input sources in a seamless manner, as well as
able to run in real-time.
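
To make the frame-alignment idea concrete, here is a minimal Python sketch that estimates the rigid transform aligning a sliding window of local VI-odometry positions to the corresponding globally referenced positions. It uses a closed-form Horn/Umeyama-style least-squares alignment as a simple stand-in for the paper's graph-based sliding-window optimization; the function name, window handling, and NumPy usage are illustrative assumptions, not the authors' implementation.

```python
# Sketch: align the local VI-odometry frame to the global frame from a
# window of corresponding 3D positions. This closed-form Kabsch/Umeyama
# alignment is a stand-in for the paper's graph-based optimization.
import numpy as np

def align_frames(local_pts, global_pts):
    """Estimate R, t such that global ~= R @ local + t (least squares)."""
    P = np.asarray(local_pts, dtype=float)   # (N, 3) VI-odometry positions
    Q = np.asarray(global_pts, dtype=float)  # (N, 3) global positions (e.g. GPS)
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                       # proper rotation (det = +1)
    t = q_bar - R @ p_bar
    return R, t

# Sliding-window use (hypothetical): keep the most recent correspondences
# and periodically re-estimate the quasi-constant alignment, e.g.
#   R, t = align_frames(window_local[-20:], window_global[-20:])
# after which any local VI pose position p maps to the global frame as
#   p_global = R @ p + t
```

A graph-based formulation, as in the paper, additionally lets each correspondence carry its own uncertainty and keeps the alignment temporally smooth, which a one-shot closed-form fit like this does not capture.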