The following report describes the procedure adopted to determine the optimal poses of the RealSense holder for the iCub. The procedure consisted of the following steps:
- Identifying the nominal pose
  - We identified a nominal pose starting from the pose conceived for the first holder prototype.
- Defining a search space
  - We defined a search space by perturbing the nominal pose in position and in orientation around the optical axis.
- Identifying the best poses
  - We identified the best poses with respect to the robot, optimizing for the difference between the optimal and the fitted superquadric.
- Choosing the final pose
  - We chose the final pose by merging the results of the previous analyses.
Currently, iCubGenova01 has the neck pitch minimum value set at -37 degrees. However, for other robots (see for example iCubGenova02 and iCubGenova04) the limit is set at -30 degrees, a value also shared by the iCub models in Gazebo. We thus adopted this value, in order to rely on the official limits while being consistent across different iCubs.
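The choice of the shared limit can be sketched as follows; this is a minimal illustration, where the dictionary simply mirrors the limits reported above:

```python
# Neck pitch lower limits in degrees, as reported in the text
# (the Gazebo iCub models share the -30 degrees limit).
neck_pitch_min = {
    "iCubGenova01": -37.0,
    "iCubGenova02": -30.0,
    "iCubGenova04": -30.0,
    "icub_gazebo": -30.0,
}

# Adopt the most restrictive (i.e. largest) lower limit, so that the
# chosen camera pose remains valid on every robot.
adopted_min = max(neck_pitch_min.values())
```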
In order to identify a nominal pose, we started from the one conceived for the first holder prototype, as defined here:
However, while the object is visible at high distances from the robot (~ 34 cm), at lower distances (~ 22 cm) it does not fall into the field of view. Therefore we decided to modify the orientation of the camera by tilting it by an additional 30 degrees, and we evaluated the result qualitatively:
Finally, the nominal position identified is (-8.4112 0.0 46.2464) cm with a rotation of 30 degrees around the optical axis.
The pose is identified with respect to the robot root frame, with X pointing backward, Y pointing right and Z pointing upwards.
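The nominal pose can be expressed as a homogeneous transform in the root frame described above. The sketch below is illustrative: the report only specifies the angle, so the rotation axis used here (a pitch-like tilt about the root Y axis) is an assumption:

```python
import math

def nominal_camera_pose(position_cm=(-8.4112, 0.0, 46.2464), angle_deg=30.0):
    """4x4 homogeneous transform of the camera w.r.t. the robot root frame.

    Root frame convention from the report: X backward, Y right, Z up.
    The rotation is applied about the root Y axis; this axis choice is
    an assumption for illustration.
    """
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    rotation = [
        [c, 0.0, s],
        [0.0, 1.0, 0.0],
        [-s, 0.0, c],
    ]
    x, y, z = position_cm
    return [
        rotation[0] + [x],
        rotation[1] + [y],
        rotation[2] + [z],
        [0.0, 0.0, 0.0, 1.0],
    ]

T = nominal_camera_pose()
```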
Given the nominal position (-8.4112 0.0 46.2464) cm and the rotation of 30 degrees, we perturbed the position and the orientation around the optical axis within the following ranges:

- x direction: [-3, 1] cm;
- y direction: [-6, 6] cm;
- z direction: [-10, 0] cm;
- orientation: [0, 25] degrees.
Such ranges were chosen considering that:

- along the x direction:
  - beyond -3 cm the object is outside the RealSense FOV;
  - below 1 cm the RealSense cannot be physically placed;
- along the y direction:
  - below -6 cm the object is outside the RealSense FOV;
  - beyond 6 cm the object is outside the RealSense FOV;
- along the z direction:
  - beyond -10 cm the RealSense enters the iCub cameras' field of view for a tilt of the eyes of 2.0 degrees;
  - below 0 cm the RealSense does not see the object;
- for the orientation:
  - below 0 degrees the object is outside the RealSense FOV;
  - beyond 25 degrees the robot itself enters the RealSense FOV.
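The search space defined by the ranges above can be enumerated as a regular grid of candidate poses. A minimal sketch follows; the step sizes are illustrative assumptions, since the report does not state the grid resolution:

```python
from itertools import product

def frange(lo, hi, step):
    """Inclusive range of floats from lo to hi with the given step."""
    vals, v = [], lo
    while v <= hi + 1e-9:
        vals.append(round(v, 6))
        v += step
    return vals

# Perturbation ranges from the text (cm and degrees); steps are assumed.
x_range = frange(-3.0, 1.0, 1.0)      # cm
y_range = frange(-6.0, 6.0, 2.0)      # cm
z_range = frange(-10.0, 0.0, 2.0)     # cm
rot_range = frange(0.0, 25.0, 5.0)    # degrees, around the optical axis

# Each candidate pose is a (dx, dy, dz, drot) perturbation of the nominal pose.
candidate_poses = list(product(x_range, y_range, z_range, rot_range))
```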
We further identified a suitable range for the object, considering the iCub grasping workspace:

| 23 cm | 34 cm |
|---|---|
The following shows an example of the scenario for a fixed pose of the RealSense, comparing the output of the iCub left camera, the RealSense and the point cloud with the fitted superquadric:

| iCub camera | RealSense | Point Cloud + Superquadric from RealSense |
|---|---|---|
For identifying the best poses, we adopted the following strategy:
- the pose of the camera is updated within the ranges identified in Section 2;
- for each pose, the point cloud of the object is extracted and a superquadric is fitted to it. Given that the tested object is a bottle, the optimal superquadric is elongated, i.e. one dimension is larger than the other two. Therefore we compute a priori the optimal superquadric for a camera pose that maximizes the object visibility in the RealSense field of view and, consequently, define a good pose as one which guarantees that the dimensions of the extracted superquadric are close to those of the optimal one.
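The pose score compares the fitted superquadric dimensions against the optimal ones. The sketch below uses a Euclidean distance as the metric and hypothetical dimensions; both are assumptions, since the report does not state the exact formula:

```python
import math

def superquadric_score(fitted_dims, optimal_dims):
    """Distance between fitted and optimal superquadric dimensions.

    Each argument holds the three semi-axis lengths of a superquadric
    (in cm). Lower scores indicate a better camera pose. The Euclidean
    distance used here is an assumed metric.
    """
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(fitted_dims, optimal_dims)))

# Hypothetical dimensions for an elongated, bottle-like object.
optimal = (3.0, 3.0, 10.0)
good_fit = (3.1, 2.9, 9.8)       # object well inside the FOV
truncated_fit = (3.0, 3.0, 4.0)  # object partially out of the FOV
```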
Specifically, the following shows the fitted superquadric for a good view of the object, for the object out of the field of view and for a top view, with the relative scores:

| Good | Object out of FOV | Top View |
|---|---|---|
| score: 0.000404 | score: 0.01016 | score: 0.0065 |
Notably, when the view is good, the score is much lower than those computed when the object is out of the field of view or when the view is too top-down.
The whole procedure is repeated for:

- object close (i.e. at 25 cm);
- object far (i.e. at 34 cm);
- object in the middle (i.e. at 29.5 cm).
The result we have when the object is as close as possible to the robot is shown in the following:
| Best pose | Set of suitable poses |
|---|---|
The nominal pose defined here is shown in red.
The best pose identified is at (-7.4112 -6 36.25) cm with a rotation of -5.07455 degrees. However, there is a range of suitable poses, shown on the right, for which the computed score was below 0.005.
The result we have when the object is far from the robot is shown in the following:
| Best pose | Set of suitable poses |
|---|---|
The best pose identified is at (-7.4112 -2 36.25) cm with a rotation of -25.0746 degrees. However, there is a range of suitable poses, shown on the right, for which the computed score was below 0.005.
The result we have when the object is placed in the middle between 25 cm and 34 cm is shown in the following:
| Best pose | Set of suitable poses |
|---|---|
The best pose identified is at (-11.4112 -6 36.25) cm with a rotation of -5.07455 degrees. However, there is a range of suitable poses, shown on the right, for which the computed score was below 0.005.
By merging the results obtained for the different object positions, the range of suitable poses for which the computed score is below 0.002 is the following:
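The merging step can be sketched as a set intersection: a pose survives only if its score stays below the stricter threshold for all three object positions. Treating the merge this way, and the example scores below, are assumptions for illustration:

```python
def merge_suitable(per_position_scores, threshold=0.002):
    """Poses whose score stays below threshold for every object position.

    per_position_scores is a list of {pose: score} dicts, one per tested
    object position (close, far, middle). The intersection under the
    stricter 0.002 threshold is an assumed reading of the merge step.
    """
    suitable = None
    for scores in per_position_scores:
        ok = {pose for pose, score in scores.items() if score < threshold}
        suitable = ok if suitable is None else suitable & ok
    return suitable

# Hypothetical scores for three candidate poses at the three distances.
close_scores = {"pose_a": 0.0010, "pose_b": 0.0015, "pose_c": 0.0040}
far_scores = {"pose_a": 0.0018, "pose_b": 0.0030, "pose_c": 0.0010}
middle_scores = {"pose_a": 0.0012, "pose_b": 0.0010, "pose_c": 0.0019}

merged = merge_suitable([close_scores, far_scores, middle_scores])
```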
Since the central poses of the camera provide a reasonable score while keeping the object in the center of the field of view when the head is centered, we further reduced the range of optimal poses, additionally removing the pose in the middle of the eyes, as it covers the eyes:
The final set of poses is reported in the following table, showing the extracted superquadric when the object is in the far, close and middle position.

| Suitable poses | Far | Close | Middle |
|---|---|---|---|
| (-9.4112 0 40.2464) cm, -25 deg | | | |
| (-8.4112 0 40.2464) cm, -25 deg | | | |
The final poses are defined with respect to the nominal one.