Home
TJArk Vision Ball Perceptor Usage

In order to use this code, the B-Human framework is required. If you want to use this code with other code releases, you may need to make some changes and manually adapt it to your own code base. To integrate this module, copy the modules and representations into their corresponding folders, Src/Modules/Perception and Src/Representations/Perception respectively. For further information, please refer to the B-Human code release.
For the sake of simplicity, we follow the same structure as the code. The algorithm consists of two modules: BallSpotProvider and BallPerceptor. This page gives a short description of each.
BallSpotProvider: This module is responsible for finding ballspots that might be the center of a ball and passing them to the BallPerceptor. Since the ball used in RoboCup 2016 has a black-and-white print, we search all the white regions (provided by the scan-line representation, which classifies image regions by color; for further information, please refer to the B-Human code release) and apply some criteria to discard spots that obviously cannot be a ball. More information can be found in the BallSpotProvider page.
BallPerceptor: The BallPerceptor is in charge of finding the most likely ball among all the ballspots provided by the BallSpotProvider. It also calculates the radius, center and distance of each ballspot and writes them to the blackboard so that other modules can use them for further computation. More information can be found in the BallPerceptor page.
BallSpotProvider page:
There are mainly two steps in the BallSpotProvider.

First step: The first step is realized by the function searchScanLines(). It searches all the white regions obtained by the scan lines, which run from the top to the bottom of the image and are provided by the representation ScanlineRegionsClipped. It then calculates the midpoint of the segment where a scan line overlaps a white region. We use this midpoint as an initial point and scan up, down, left and right to find the edge points of the white region; these might be the edge points of a real ball. When the color changes from black or white to green, we consider the scan to have reached an edge point. To reduce the influence of noise, we use a parameter called skipped: the scan only stops after skipping several green pixels consecutively. After that we have four possible edge points and calculate the ballspot center with the following formulas:

x = ((left_stop_pixel + left_skipped) + (right_stop_pixel - right_skipped)) / 2

y = ((up_stop_pixel + up_skipped) + (down_stop_pixel - down_skipped)) / 2

From the four edge points we then calculate the center and radius of the ballspot, which are used by the filter in the second step.
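The center computation above can be sketched as follows. This is an illustrative reconstruction, not the module's actual code: the struct and function names are hypothetical, and the scans are assumed to overshoot the edge by `skipped` pixels, which the formulas compensate for.

```cpp
#include <cassert>

// Result of one directional scan: where it stopped, and how many green
// pixels it skipped past the true edge before stopping.
struct ScanResult { int stopPixel; int skipped; };

// Horizontal center: the left scan overshoots toward smaller x, so its edge
// is stopPixel + skipped; the right scan overshoots toward larger x, so its
// edge is stopPixel - skipped. The center is the midpoint of the two edges.
int centerX(const ScanResult& left, const ScanResult& right) {
  return ((left.stopPixel + left.skipped) + (right.stopPixel - right.skipped)) / 2;
}

// Vertical center: analogous, using the up and down scans.
int centerY(const ScanResult& up, const ScanResult& down) {
  return ((up.stopPixel + up.skipped) + (down.stopPixel - down.skipped)) / 2;
}
```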
Second step: The second step is realized by the function getBallSpot(). It uses three criteria to judge whether a ballspot is valid and discards it otherwise. The criteria are as follows:

(1) Is the ballspot on a robot? Discard it if this returns true. This check is realized by the function isOnRobot().

(2) Is the ballspot's distance to the robot valid? Discard it if this returns false.

(3) Is the center of the ballspot a white or black pixel? Discard it if this returns false.

Filtering the candidate ballspots with these three criteria discards many invalid ballspots and saves a lot of computation.
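The three-criterion filter can be sketched like this. The struct fields, function names and threshold parameters are hypothetical stand-ins for the checks inside getBallSpot(), shown only to make the control flow concrete:

```cpp
#include <cassert>

// Minimal candidate representation; in the real module these properties come
// from other representations (body contour, camera matrix, color classes).
struct BallSpot {
  float x, y, radius;
  bool onRobot;             // result of the isOnRobot()-style check
  bool centerBlackOrWhite;  // is the center pixel classified black or white?
  float distance;           // estimated distance to the robot (mm)
};

// Apply the three criteria in order; a spot failing any one is discarded.
bool isValidBallSpot(const BallSpot& s, float minDist, float maxDist) {
  if (s.onRobot)                                     // (1) spot lies on a robot
    return false;
  if (s.distance < minDist || s.distance > maxDist)  // (2) implausible distance
    return false;
  if (!s.centerBlackOrWhite)                         // (3) center not black/white
    return false;
  return true;
}
```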
BallPerceptor page:
The BallPerceptor mainly includes two functions: fitball() and classifyBalls2(). The fitball() function finds 24 edge points based on a ballspot and tries to fit a circle through them. If this succeeds, it saves the ballspot as a possible ball. All possible balls are then passed to the second function, classifyBalls2(). classifyBalls2() is a classifier that uses several criteria to decide whether a possible ball is valid. It also assigns a score to each possible ball that meets the criteria, so that we can choose the most likely ball among them. These two functions are detailed as follows:
fitball(): fitball() is in charge of finding the edge points of a ballspot and trying to fit a circle through them. If this succeeds, it considers the ballspot a possible ball. This function consists of two steps:
First step: Choose three points, called guesspoints, inside the circle whose center is the ballspot and whose radius was calculated in the BallSpotProvider. These three guesspoints must not lie on the same line, so that the edge points found do not repeat. Each guesspoint is then used as a start point to trace outward to the region's extrema. Eight scan lines run from each guesspoint, and each finishes when it finds enough green pixels. The scanning directions and guesspoints are shown in the following figures:

guesspoint_1: ![image](https://github.com/TJArk-Robotics/Vision_Ball_Perceptor_2016/blob/master/guesspoint1.png)

guesspoint_2:
guesspoint_3:
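The non-collinearity requirement on the three guesspoints can be sketched as follows. The point-selection scheme here (center plus two half-radius offsets) is a hypothetical illustration; the actual offsets used by fitball() may differ:

```cpp
#include <cassert>

struct Point { int x, y; };

// Twice the signed area of triangle (a, b, c); zero iff the points are collinear.
int cross(const Point& a, const Point& b, const Point& c) {
  return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

// Example choice: the ballspot center plus two offsets at half the estimated
// radius, one horizontal and one vertical, which are never collinear for
// radius >= 2 and stay inside the candidate circle.
bool chooseGuessPoints(Point center, int radius, Point out[3]) {
  out[0] = center;
  out[1] = { center.x + radius / 2, center.y };
  out[2] = { center.x, center.y + radius / 2 };
  return cross(out[0], out[1], out[2]) != 0;
}
```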
Second step: After the first step, we have 24 edge points of a ballspot. In the second step we use these edge points to try to fit a circle, using the RANSAC algorithm. Since the least-squares method is easily influenced by noise, and the edge points found in the first step sometimes contain a few noise points, RANSAC performs better here than least squares. Through the two steps described above, we obtain some possible balls. We then re-calculate their centers and radii and pass them to classifyBalls2().
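A minimal RANSAC circle fit in the spirit of this step might look as follows. The sampling scheme, iteration count and inlier threshold are illustrative assumptions, not the module's actual parameters:

```cpp
#include <cmath>
#include <cstdlib>
#include <vector>
#include <cassert>

struct Pt { double x, y; };
struct Circle { double cx, cy, r; };

// Circle through three points via the intersection of perpendicular bisectors.
// Returns false if the points are (nearly) collinear.
bool circleFrom3(const Pt& a, const Pt& b, const Pt& c, Circle& out) {
  double d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
  if (std::fabs(d) < 1e-9) return false;
  double aa = a.x * a.x + a.y * a.y;
  double bb = b.x * b.x + b.y * b.y;
  double cc = c.x * c.x + c.y * c.y;
  out.cx = (aa * (b.y - c.y) + bb * (c.y - a.y) + cc * (a.y - b.y)) / d;
  out.cy = (aa * (c.x - b.x) + bb * (a.x - c.x) + cc * (b.x - a.x)) / d;
  out.r = std::hypot(a.x - out.cx, a.y - out.cy);
  return true;
}

// RANSAC: repeatedly hypothesize a circle from 3 random edge points and keep
// the hypothesis supported by the most inliers; noisy edge points are simply
// outvoted instead of dragging the fit as they would with least squares.
bool ransacCircle(const std::vector<Pt>& pts, double tol, int iters, Circle& best) {
  if (pts.size() < 3) return false;
  int bestInliers = 0;
  for (int i = 0; i < iters; ++i) {
    Circle c;
    if (!circleFrom3(pts[std::rand() % pts.size()],
                     pts[std::rand() % pts.size()],
                     pts[std::rand() % pts.size()], c))
      continue;
    int inliers = 0;
    for (const Pt& p : pts)
      if (std::fabs(std::hypot(p.x - c.cx, p.y - c.cy) - c.r) < tol)
        ++inliers;
    if (inliers > bestInliers) { bestInliers = inliers; best = c; }
  }
  return bestInliers >= static_cast<int>(pts.size()) / 2;  // majority support
}
```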
classifyBalls2(): This function is in charge of deciding whether a possible ball found by fitball() is valid. If it is, the ball is saved as a candidate. In the end, the function chooses the most likely ball as the ballPercept among the candidates according to their scores. First, it uses Otsu's algorithm to find a suitable threshold to distinguish the black pixels from the white pixels inside the possible ball; with this algorithm the separation remains correct despite frequently changing lighting conditions. The function then goes through all the pixels inside the square whose center is the possible ball's center and whose side length is the possible ball's diameter. From this it derives several statistics that decide whether the candidate is a valid ball. The statistics are listed as follows:
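Otsu's method picks the threshold that maximizes the between-class variance of the gray-value histogram, which is what makes the black/white split robust to lighting changes. A standard sketch over an 8-bit histogram (not the module's exact implementation) looks like this:

```cpp
#include <vector>
#include <cassert>

// Otsu's method: return the threshold t in [0, 255] that maximizes the
// between-class variance wB * wF * (meanB - meanF)^2 when pixels are split
// into "black" (value <= t) and "white" (value > t).
int otsuThreshold(const std::vector<int>& hist) {  // hist has 256 bins
  long total = 0, sumAll = 0;
  for (int i = 0; i < 256; ++i) {
    total += hist[i];
    sumAll += static_cast<long>(i) * hist[i];
  }
  long sumB = 0, wB = 0;
  double bestVar = -1.0;
  int best = 0;
  for (int t = 0; t < 256; ++t) {
    wB += hist[t];                  // weight of the "black" class
    if (wB == 0) continue;
    long wF = total - wB;           // weight of the "white" class
    if (wF == 0) break;
    sumB += static_cast<long>(t) * hist[t];
    double mB = static_cast<double>(sumB) / wB;
    double mF = static_cast<double>(sumAll - sumB) / wF;
    double betweenVar = static_cast<double>(wB) * wF * (mB - mF) * (mB - mF);
    if (betweenVar > bestVar) { bestVar = betweenVar; best = t; }
  }
  return best;
}
```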
(1) ratioGreen: the ratio of green pixels that lie inside the square but outside the circle to all pixels outside the circle. A candidate is only accepted as a ball if this ratio exceeds a certain threshold: a valid ball is on the ground, so there must be a certain quantity of green pixels around it.

(2) ratioTotal: the ratio of black or white pixels to all pixels in the circle.

(3) varY: the variance of the gray values of the black and white pixels in the circle. This parameter protects against wrongly confirming a possible ball.

(4) whitePercent: the proportion of white pixels in the circle.

(5) ratio: the ratio between black and white pixels in the circle.

(6) meanWhite: the average gray value of the white pixels in the circle.

(7) meanBlack: the average gray value of the black pixels in the circle.

(8) Score: the score of a possible ball, computed as:
Score = tansig(ratio, 0.3f) * 0.2f + tansig(ratioTotal, 1.2f) * 0.4f + tansig(ratioGreen, 0.7f) * 0.4f
The tansig function can be found in BallPerceptor.cpp.
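For illustration, the score formula can be sketched as below. The exact two-argument tansig lives in BallPerceptor.cpp; as a placeholder we assume a tanh squashed by a reference value k, i.e. tansig(x, k) = tanh(x / k), which maps a non-negative ratio into [0, 1). The real definition may differ:

```cpp
#include <cmath>
#include <cassert>

// Assumed placeholder for the module's tansig: a tanh scaled so that the
// second argument k sets how quickly the ratio saturates toward 1.
float tansig(float x, float k) { return std::tanh(x / k); }

// Weighted combination of the three ratios, following the score formula.
float ballScore(float ratio, float ratioTotal, float ratioGreen) {
  return tansig(ratio, 0.3f) * 0.2f
       + tansig(ratioTotal, 1.2f) * 0.4f
       + tansig(ratioGreen, 0.7f) * 0.4f;
}
```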
A candidate is only confirmed as a valid ball if all seven statistics above meet their requirements and the score exceeds the threshold. Finally, if no valid ball is confirmed, we simply conclude that there is no ball in the field of vision. If more than one ball is confirmed, we take the one with the highest score.