
How it Works

kostas-alexis edited this page Jun 6, 2017 · 1 revision


This section aims to provide a brief overview of the basic functional principles of the RHEM planner. Detailed documentation of the work is available at:

@inproceedings{papachristos2017uncertainty,
  title={Uncertainty-aware Receding Horizon Exploration and Mapping Using Aerial Robots},
  author={Papachristos, Christos and Khattak, Shehryar and Alexis, Kostas},
  booktitle={2017 IEEE International Conference on Robotics and Automation (ICRA)},
  pages={to appear},
  year={2017},
  organization={IEEE}
}

Considered Localization Process

The proposed planner considers the existence of an onboard localization process that estimates the robot pose and tracked landmarks in the environment. Indicatively, that can be a visual-inertial localization pipeline. In particular the following state vector is considered to be estimated:

$$\mathbf{x}_k = \left[\mathbf{x}_p^T \;\; \mathbf{x}_s^T \;\; \mathbf{x}_f^T\right]^T, \quad \mathbf{x}_p \in \mathbb{R}^{l_p},\ \mathbf{x}_s \in \mathbb{R}^{l_s},\ \mathbf{x}_f \in \mathbb{R}^{l_f} \quad (1)$$

where $l_p, l_s, l_f$ are the appropriate dimensions for the robot pose, further states (including the observed biases), and tracked landmarks, respectively. Given the estimate of the robot pose, this is then expressed in the world frame and the relevant pose transformations from frame $k-1$ to $k$ become available.
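As a small illustration of expressing the pose in the world frame, the sketch below composes homogeneous transforms with NumPy. The transform names and numeric values are hypothetical, not from the planner's implementation:

```python
import numpy as np

def se3(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative values: pose of frame k-1 in the world, and the estimated
# incremental motion from frame k-1 to frame k (here a pure translation).
T_W_km1 = se3(np.eye(3), np.array([1.0, 0.0, 0.5]))
T_km1_k = se3(np.eye(3), np.array([0.2, 0.0, 0.0]))

# Express the new pose in the world frame by composing the transforms.
T_W_k = T_W_km1 @ T_km1_k
print(T_W_k[:3, 3])  # -> [1.2 0.  0.5]
```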

RHEM Planner Operation

The RHEM planner relies on a two-step, receding horizon, belief-space-based approach. First, within an online-computed random tree, the algorithm identifies the branch that maximizes the amount of new space expected to be explored. The first viewpoint configuration of this branch is selected, but the path towards it is decided through a second planning step. Within that step, a new tree is sampled, admissible branches arriving at the reference viewpoint are found, and the robot belief about its state and the tracked landmarks is propagated. The system state is the concatenation of the robot states and tracked landmarks (visual features), as in Equation (1). The branch that minimizes the localization uncertainty, factorized using the D-optimality (D-opt) of the pose and landmark covariance, is then selected. The corresponding path is executed by the robot and the process is iteratively repeated. The following figure illustrates the basic steps of this planner.
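The two-step selection can be sketched as follows. All branch data below is hard-coded for illustration; the real planner samples random trees online and computes the gains from the map and the propagated belief:

```python
# Step 1: exploration tree -- each branch is a tuple of viewpoint
# configurations, annotated with its expected exploration gain.
exploration_tree = {
    ("v1", "v2", "v3"): 4.0,
    ("v4", "v5"): 7.5,          # best branch: maximal expected exploration
}
best_branch = max(exploration_tree, key=exploration_tree.get)
xi_rh = best_branch[0]          # only the first viewpoint is committed

# Step 2: a second tree of admissible paths arriving at xi_rh, each
# annotated with the D-opt uncertainty after belief propagation.
belief_tree = {
    ("p1", xi_rh): 0.9,
    ("p2", "p3", xi_rh): 0.4,   # lowest uncertainty -> executed path
}
best_path = min(belief_tree, key=belief_tree.get)
print(xi_rh, best_path)  # -> v4 ('p2', 'p3', 'v4')
```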

2D representation of the two-step uncertainty-aware exploration and mapping planner. The first planning layer samples the path with the maximum exploration gain. The viewpoint configuration of the first vertex of this path becomes the reference for the second planning layer. This second layer samples admissible paths that arrive at this configuration, performs belief propagation along the tree edges, and selects the path that provides minimum uncertainty over the robot pose and tracked landmarks.

The planner assumes a volumetric, incrementally built map $\mathcal{M}$, which is reconstructed using the robot's perception unit and annotates every voxel with a probability indicating how certain the mapping is about it being occupied. For the first planning step, a random tree (RRT) is sampled in the position-heading space and its vertices are annotated with the cumulative "exploration gain" (given the sensor model and the current map), alongside a "reobservation gain" expressing the importance of reobserving voxels with low mapping certainty. For a sampled robot configuration $\xi_k$, the exploration gain at the vertex $n_k$, reached via the path $\sigma_{k-1}^{k}$, is defined as:

$$\mathrm{ExplorationGain}(n_k) = \mathrm{ExplorationGain}(n_{k-1}) + \left(\mathrm{VisibleGain}(\mathcal{M},\xi_k) + \mathrm{ReobservationGain}(\mathcal{M},\xi_k)\right) e^{-\lambda c(\sigma_{k-1}^{k})} \quad (2)$$

where $\sigma_{k-1}^{k}$ denotes the path from one sampled configuration to the next, $c(\sigma_{k-1}^{k})$ is the length of that path, and $\lambda$ is a parameter penalizing long paths. At the end of this step, the path that maximally explores is identified and its first viewpoint configuration $\xi_{RH}$ is returned.
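A minimal sketch of accumulating this gain along a branch, under the assumption that the exponential penalty is applied to the cumulative path length from the root (one plausible reading; the per-vertex visible gains are hypothetical numbers, not computed from a map):

```python
import math

def cumulative_gain(branch, lam=0.5):
    """Accumulate gain along a branch: each vertex contributes its visible
    volumetric gain, discounted by exp(-lam * path length to reach it)."""
    total = 0.0
    path_len = 0.0
    for visible_gain, edge_len in branch:
        path_len += edge_len
        total += visible_gain * math.exp(-lam * path_len)
    return total

# Each tuple: (volumetric gain visible from the vertex, edge length to it).
branch = [(10.0, 1.0), (5.0, 2.0)]
g = cumulative_gain(branch)
```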

Given $\xi_{RH}$, the second planning step ensures that this "next-best-view" is visited through a path that minimizes the localization uncertainty of the robot. To achieve this, it samples a new random tree, searches multiple branches that arrive at $\xi_{RH}$, and evaluates which one leads to minimized localization uncertainty. The state vector to be propagated is that of Equation (1). The belief propagation steps are executed for the branches $\sigma^M_\alpha$ of the tree that arrive close to $\xi_{RH}$. For each $\sigma^M_\alpha$ the planner evaluates the "belief gain" at the final configuration, calculated as the D-opt of the subset $\Sigma_{p,f}$ of the covariance matrix that refers to the robot pose and landmarks:

$$\mathrm{BeliefGain}(\sigma^M_\alpha) = \mathrm{D\text{-}opt}(\Sigma_{p,f}) = \exp\left(\frac{1}{l}\ln\det\Sigma_{p,f}\right) \quad (3)$$

where $l$ is the dimension of $\Sigma_{p,f}$.

Through this process, a path that minimizes the localization uncertainty, while arriving at the viewpoint provided by the exploration planning step, is identified.
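The D-opt factorization above can be computed numerically as shown below. The definition exp(ln(det Σ)/l) is the geometric mean of the covariance eigenvalues, one common form of D-optimality; the covariance blocks here are hypothetical stand-ins for the propagated pose-and-landmark covariances of two candidate branches:

```python
import numpy as np

def d_opt(cov):
    """D-optimality of a covariance matrix: exp(log(det(cov)) / l),
    i.e. the geometric mean of its eigenvalues. slogdet is used to
    avoid overflow/underflow of the raw determinant."""
    sign, logdet = np.linalg.slogdet(cov)
    return np.exp(logdet / cov.shape[0])

# Hypothetical pose+landmark covariances for two candidate branches.
sigma_a = np.diag([0.04, 0.04, 0.09])   # lower uncertainty
sigma_b = np.diag([0.25, 0.16, 0.09])
best = min([("a", sigma_a), ("b", sigma_b)], key=lambda kv: d_opt(kv[1]))
print(best[0])  # -> a
```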

For this belief propagation process, the prediction step uses artificial IMU measurements generated by simulating a motion model of the vehicle along the branches of the sampled tree, while the update step relies on the tracked landmarks, evaluating which of them would be re-observed from the newly sampled configurations within the map.
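A minimal linear-Gaussian sketch of this predict/update pattern is given below. It is not the planner's visual-inertial filter: all Jacobians and noise matrices are illustrative constants, chosen only to show that prediction grows the covariance while a landmark re-observation shrinks it:

```python
import numpy as np

def propagate_belief(P0, n_predict_steps, landmark_reobserved):
    """Kalman-style covariance propagation along one candidate branch."""
    P = P0.copy()
    F = np.eye(2)                      # trivial motion Jacobian (illustrative)
    Q = 0.01 * np.eye(2)               # process noise from simulated IMU
    H = np.array([[1.0, 0.0]])         # landmark observation Jacobian
    R = np.array([[0.05]])             # measurement noise
    for _ in range(n_predict_steps):
        P = F @ P @ F.T + Q            # prediction: uncertainty grows
    if landmark_reobserved:
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        P = (np.eye(2) - K @ H) @ P    # update: uncertainty shrinks
    return P

P0 = 0.1 * np.eye(2)
trace_no_obs = np.trace(propagate_belief(P0, 5, False))
trace_with_obs = np.trace(propagate_belief(P0, 5, True))
print(trace_with_obs < trace_no_obs)  # -> True
```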

Explanation Video

The following video illustrates the basic steps of the RHEM planner:

RHEM planner demo
