Question about SimpleGPsOdometry #50
We don't need to transform the pose from the GPS ENU (East, North, Up) frame into the Lidar frame. The ENU frame is absolute, with fixed east and north directions, and can be represented either by an ellipsoid model (taking the current position as the ENU origin) or by a plane model (the UTM system, where altitude is taken as z). The LIO frame, by contrast, is relative: it is initialized at an arbitrary heading at the starting point, typically with an identity pose. The difference between the two frames is essentially a rotation.

In this project, we've adopted a batch-processing method to align the two coordinate systems. This method requires that the initial segment of GNSS data be of good quality and cover sufficient displacement. We then use a factor graph combining GPS factors and odom factors in a batch optimization. Following the GPS-factor implementation, the final fused pose trajectory is expressed in the GNSS ENU frame of the GPS receiver.

The reason for unifying everything into the GNSS ENU frame is its absolute nature. If the point-cloud map is later used for localization against a prior map in combination with GNSS, the prior map's coordinate system must be unified with GNSS, which requires the WGS 84 coordinates of the map origin. Without this unification, GNSS could not be used for positioning, or additional initialization steps would be required.

Regarding the extrinsics between the Lidar and the GNSS receiver: there is a rigid-body transformation between the two sensors, and calibrating it directly is challenging. Accurate extrinsics are beneficial when you need the optimized trajectory expressed in the Lidar frame; for building the map, however, they are not crucial.
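As a sketch of the "sufficient displacement" precondition mentioned above (the struct, function name, and threshold are all hypothetical, not taken from this repo): before attempting batch alignment, one might check that the good-quality GNSS fixes span enough distance for the ENU-to-LIO yaw to be well constrained.

```cpp
#include <cmath>
#include <vector>

struct Fix { double e, n; bool good; };  // ENU position + quality flag

// Hypothetical gate: only start the batch alignment once the
// good-quality fixes span enough horizontal displacement.
bool readyForAlignment(const std::vector<Fix>& fixes, double min_disp) {
  const Fix* first = nullptr;
  for (const Fix& f : fixes) {
    if (!f.good) continue;  // skip poor-quality fixes
    if (!first) { first = &f; continue; }
    if (std::hypot(f.e - first->e, f.n - first->n) >= min_disp) return true;
  }
  return false;
}
```

A pure rotation cannot be recovered from fixes clustered at one point, which is why a displacement threshold (rather than a fix count) is the natural gate here.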
Right, it is calculated in:

```cpp
Eigen::Vector3d t = -LLA2ECEF(lla_origin_);
Eigen::Matrix3d r = ...
Eigen::Vector3d enu;
enu = ecef + t;
enu = r * enu;
```

I am mainly confused about this statement.
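For reference, here is a self-contained sketch of the LLA → ECEF → ENU chain that the Eigen snippet above performs. It uses only the standard library; the function names are invented for illustration, and the repo's `LLA2ECEF` is assumed to implement the same standard WGS-84 formulas.

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// WGS-84 geodetic (lat, lon in radians; alt in meters) -> ECEF.
Vec3 llaToEcef(double lat, double lon, double alt) {
  const double a = 6378137.0;            // semi-major axis
  const double f = 1.0 / 298.257223563;  // flattening
  const double e2 = f * (2.0 - f);       // first eccentricity squared
  const double sLat = std::sin(lat), cLat = std::cos(lat);
  const double N = a / std::sqrt(1.0 - e2 * sLat * sLat);
  return {(N + alt) * cLat * std::cos(lon),
          (N + alt) * cLat * std::sin(lon),
          (N * (1.0 - e2) + alt) * sLat};
}

// ECEF -> local ENU about an origin given in geodetic coordinates.
// Mirrors `enu = r * (ecef + t)` with t = -LLA2ECEF(lla_origin_) and
// r the ECEF->ENU rotation built from the origin's latitude/longitude.
Vec3 ecefToEnu(const Vec3& ecef, double lat0, double lon0, double alt0) {
  const Vec3 o = llaToEcef(lat0, lon0, alt0);
  const double dx = ecef[0] - o[0], dy = ecef[1] - o[1], dz = ecef[2] - o[2];
  const double sP = std::sin(lat0), cP = std::cos(lat0);
  const double sL = std::sin(lon0), cL = std::cos(lon0);
  return {-sL * dx + cL * dy,                      // east
          -sP * cL * dx - sP * sL * dy + cP * dz,  // north
          cP * cL * dx + cP * sL * dy + sP * dz};  // up
}
```

The `r` elided above is exactly the row-wise east/north/up basis used in `ecefToEnu`; it depends only on the origin's latitude and longitude.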
Where in the code does this happen? To my understanding, the unaligned poses of the GPS (in ENU, relative) and the Lidar (in the local ROS right-handed xyz frame, relative) are put into the factor graph by
The following code demonstrates the batch optimization: once a sufficient number of GPS factors have been introduced, iSAM updates incrementally, achieving trajectory fusion (i.e., migrating the whole trajectory into the GPS coordinate system).
You can understand the GPS factor as a unary 3-DOF prior constraint on position, while the odom factor is a relative constraint between two consecutive poses. Essentially, it's a least-squares problem: given weights (the noise models), the goal is to minimize the sum of the prior position errors and the relative pose errors. Because the prior errors account for the majority, the final fused trajectory is pulled into the GNSS coordinate system. For more details on the GPS factor, you can refer to its implementation in GTSAM, which is very straightforward.
Ah, ok, thank you very much! I saw that line but had no idea all the magic is hidden in
You can regard it as the Umeyama algorithm in the EVO tool, which aligns two pose trajectories and is solved by least squares. Thanks to the global optimization capability of the factor graph, we don't need to complete the coordinate transformation first, as the original version of LIO-SAM does by relying on robot_localization. That approach can fail to initialize in areas where GNSS quality is poor at the starting position (such as the data in hkust-full-xx.bag); the batch-processing method in this project doesn't need to worry about the initial GNSS quality. Fusion can be completed as long as there are enough GNSS observations in the bag overall.
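A minimal sketch of the Umeyama-style idea in 2-D, for intuition only (rotation-only, both trajectories assumed to share an origin; EVO's actual implementation also estimates translation and optionally scale, via SVD): in 2-D the SVD step collapses to a closed-form `atan2`.

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Rotation-only 2-D trajectory alignment: find the yaw that best
// rotates trajectory a onto trajectory b in the least-squares sense.
double alignYaw(const std::vector<Pt>& a, const std::vector<Pt>& b) {
  double s = 0.0, c = 0.0;
  for (std::size_t i = 0; i < a.size(); ++i) {
    c += a[i].x * b[i].x + a[i].y * b[i].y;  // sum of dot products
    s += a[i].x * b[i].y - a[i].y * b[i].x;  // sum of cross products
  }
  return std::atan2(s, c);
}
```

This is the same "one rotation between the two frames" picture described earlier in the thread, just solved in closed form instead of through the factor graph.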
Nice, yes, Umeyama in EVO is a nice way to think about it. Unrelated: Is there any way I can contact you via email?
|
Now, if I want to add a GPS loop factor based on radius search as well, I don't think I can avoid transforming GPS coordinates into the lidar frame manually (no GTSAM Umeyama-style alignment), because the GTSAM BetweenFactors are all added in the lidar frame? Am I correct in that assumption?
@juliangaal I think you may need to reconsider this issue. The GNSS coordinate system is an absolute ENU (East, North, Up) system, and fusing GNSS with LIO generally requires calibration. There are two types of calibration:
In this repo, the calibration between GNSS and LIDAR falls into the second category. We assume that GNSS and IMU are mounted together (the LIDAR-to-IMU extrinsics include this translation). However, the second calibration method has a problem with yaw: if you change the starting point, the observed GNSS coordinates change (meaning different origins of the ENU system), which implies different yaw values. Even the number of GNSS observations used for fusion affects the yaw estimate, so it is challenging to calibrate a fixed yaw offline. Using a fixed yaw to transform GNSS observations into the LIO system is therefore not a mature approach. Many papers have shown that fusing GNSS this way in a LIO/VIO system leaves 4 degrees of freedom poorly constrained, resulting in larger errors. Fusion in the ENU system, on the other hand, keeps all 6 degrees of freedom observable (as discussed in VIO studies).
Regarding that sentence above: for the ENU frame it mentions, I think that when computing the heading angle we can treat the ENU frame as unchanged; within a small area the effect is negligible. I'm a graduate student in surveying and mapping.
That is just a simple position difference. We assume instantaneous planar motion with roll and pitch equal to zero, so the attitude is yaw only. It is merely a reference pose computed for plotting and has nothing to do with the optimization.
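The position-differencing heading just described can be sketched as follows (the function name and angle convention are assumptions for illustration, not this repo's code):

```cpp
#include <cmath>

// Heading from a simple position difference between two consecutive
// ENU positions, assuming instantaneous planar motion (roll = pitch = 0).
// Here yaw is measured counter-clockwise from the east axis.
double yawFromDiff(double e0, double n0, double e1, double n1) {
  return std::atan2(n1 - n0, e1 - e0);
}
```

Note that navigation convention often measures heading clockwise from north instead (`atan2(de, dn)`); since this value is only a plotting reference, either convention works as long as it is used consistently.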
If I understand your code correctly, you convert LLA -> ECEF -> ENU, and add this value to the factor graph.
I don't quite understand how this is possible, since the keyframes in
addOdomFactor
are in the lidar frame, not ENU. How do you convert the ENU pose into the robot's frame? Also, could you explain this comment? The IMU is in its own frame in normal systems, to my understanding.
Thanks a lot for your help!