A solution to filter obstacles on z axis for predicted objects #3857
-
@beyzanurkaya I appreciate you starting this discussion and the very comprehensive explanation. The filter can be enabled by setting use_low_height_cropbox = true, and the maximum height can be configured via max_z. Then an UNKNOWN object like the tree in your figure will either not be detected, or its footprint will be shrunk so that it does not affect the ego planner. This may be a temporary solution, and we should explore more robust approaches.
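For reference, here is a minimal sketch of the idea behind use_low_height_cropbox / max_z, not the actual Autoware implementation: drop every point above max_z before the cloud reaches clustering, so overhanging branches never contribute to an object's footprint. The function name and the PCL CropBox usage are illustrative assumptions.

```cpp
#include <pcl/filters/crop_box.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Minimal sketch (not the actual Autoware code): keep only points with
// z <= max_z so high branches are removed before clustering.
pcl::PointCloud<pcl::PointXYZ>::Ptr cropLowHeight(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & input, float max_z)
{
  pcl::CropBox<pcl::PointXYZ> crop;
  crop.setInputCloud(input);
  // Very large XY bounds so only the Z range actually restricts the cloud.
  crop.setMin(Eigen::Vector4f(-1e6f, -1e6f, -1e6f, 1.0f));
  crop.setMax(Eigen::Vector4f(1e6f, 1e6f, max_z, 1.0f));
  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  crop.filter(*output);
  return output;
}
```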
-
In the future, filtering could be done by computing heights from the ground-segmentation results; this would also solve the point cloud filtering problem on uphill and downhill slopes.
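To illustrate the suggestion above, here is a rough sketch (not existing Autoware code) of filtering by height above the segmented ground instead of by absolute Z, so the same threshold works on slopes. The ground cloud input, the function name, and the nearest-neighbor lookup are assumptions made for the example.

```cpp
#include <cstdint>
#include <vector>

#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Sketch: keep only points whose height above the local ground is small.
// "Local ground" is approximated by the nearest point of the segmented
// ground cloud, which keeps the threshold meaningful on slopes.
pcl::PointCloud<pcl::PointXYZ>::Ptr cropByHeightAboveGround(
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & obstacle_cloud,
  const pcl::PointCloud<pcl::PointXYZ>::Ptr & ground_cloud,
  float max_height_above_ground)
{
  pcl::KdTreeFLANN<pcl::PointXYZ> ground_tree;
  ground_tree.setInputCloud(ground_cloud);

  pcl::PointCloud<pcl::PointXYZ>::Ptr output(new pcl::PointCloud<pcl::PointXYZ>);
  std::vector<int> idx(1);
  std::vector<float> sq_dist(1);
  for (const auto & p : obstacle_cloud->points) {
    if (ground_tree.nearestKSearch(p, 1, idx, sq_dist) > 0) {
      const float ground_z = ground_cloud->points[idx[0]].z;
      // Keep only points close enough to the local ground surface.
      if (p.z - ground_z <= max_height_above_ground) {
        output->points.push_back(p);
      }
    }
  }
  output->width = static_cast<std::uint32_t>(output->points.size());
  output->height = 1;
  return output;
}
```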
-
In autonomous driving, accurate detection and tracking of objects are critical for safe and reliable operation. One of the challenges with predicted objects is filtering them on the Z-axis. This discussion aims to explore potential solutions and strategies for improving object detection and filtering in Autoware.
In Autoware, Z-axis filtering of predicted objects is currently handled by the following function:
Here is the original code: obstacle_stop_planner
However, I believe this implementation is not sufficient because it filters based on the object's shape, not the points associated with the object.
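To make the concern concrete, the sketch below shows roughly what a purely shape-based Z check looks like; it is an illustration, not the actual obstacle_stop_planner code. Because the decision only sees the object's center height and overall height, a tall UNKNOWN cluster that merges bushes with overhanging branches can overlap the checked Z range even if none of its measured points lie at vehicle height.

```cpp
// Illustrative only -- not the actual obstacle_stop_planner implementation.
// A shape-based check reasons about the object's bounding Z extent rather
// than the measured points, so a tall UNKNOWN cluster (bush plus overhanging
// branch) overlaps the [min_z, max_z] band even when no point is inside it.
bool overlapsZRange(double object_center_z, double object_height,
                    double min_z, double max_z)
{
  const double bottom = object_center_z - 0.5 * object_height;
  const double top = object_center_z + 0.5 * object_height;
  return top >= min_z && bottom <= max_z;
}
```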
We are currently conducting autonomous vehicle tests on a university campus with many trees and bushes. These trees and bushes are classified as UNKNOWN objects, and the polygon drawn for them looks like the following. These are images from our campus tests:
As shown in the images, there are no points inside the road area, but the branches of the tree extend into the road along the Z-axis. The polygon is drawn based on these branches, even though there is no obstacle on the road itself. This can lead to the behavior_path_planner module avoiding trees and the obstacle_cruise_planner module stopping for them, even though there is no actual obstacle in the path.
avoiding-to-bushes.mp4
stopping-for-bushes.mp4
Filtering predicted objects on the Z-axis is not feasible on the planning side. By the time objects reach the planning stage, there is not enough information left to filter them: the polygon drawn for the clustered object does not completely enclose the tree. If cropping were done on the planning side, we could not be sure whether the points belonging to the clustered object are inside the cropped region, because another object could produce a polygon similar to the tree's, and in that case the removed points would matter.
In summary, obstacle_stop_planner currently performs Z-axis filtering on the point cloud. However, we believe that applying this filtering to predicted objects on the planning side comes too late. We think this filtering should happen before objects are clustered, and we consider it primarily a design issue. Therefore, we would like to gather input from both the perception/sensing and planning teams.
So, do you have any opinions or suggestions?
@mitsudome-r @mehmetdogru @maxime-clem @takayuki5168 @miursh @lchojnack @kaancolak @Zeysthingz @armaganarsln @ismetatabay