I printed the camera intrinsic parameters with cam.intrinsic_matrix and tried to convert the depth image to a point cloud using them, but the resulting point cloud is dramatically incorrect. The depth image looks fine, so are the intrinsic parameters wrong?
Transforming a point p = (x, y, z) in the camera frame via K * p produces p' = (x', y', w), the point in the image plane. To get pixel coordinates, divide x' and y' by w.
I'm a bit confused by the intrinsic_matrix description: what are K, p, and w?
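The projection described above can be sketched as follows. The intrinsic values here (fx, fy, cx, cy) are placeholder assumptions for illustration, not the values OmniGibson reports:

```python
import numpy as np

# Hypothetical pixel-unit intrinsics for a 640x480 image (assumed values,
# not the ones printed by cam.intrinsic_matrix).
fx, fy = 600.0, 600.0
cx, cy = 320.0, 240.0
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

p = np.array([0.5, -0.25, 2.0])    # point (x, y, z) in the camera frame
p_prime = K @ p                    # (x', y', w), the point in the image plane
u, v = p_prime[:2] / p_prime[2]    # divide x' and y' by w for pixel coordinates
```

So K is the 3x3 intrinsic matrix, p is a 3D point in the camera frame, and w is the homogeneous coordinate (equal to the point's depth z for a standard pinhole K).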
my code:
# init og, cam, etc...
third_cam = VisionSensor(
    prim_path="/World/viewer_camera",
    name="my_vision_sensor",
    modalities=["rgb", "depth"],
    enabled=True,
    image_height=480,
    image_width=640,
    focal_length=17,
    clipping_range=(0.01, 1000000.0),
)
third_cam.initialize()
print(third_cam.intrinsic_matrix)
# get [[1.62252451 0.         0.        ]
#      [0.         2.16336601 0.        ]
#      [0.         0.         1.        ]]
# ...
import torch

def get_point_cloud(depth):
    """Projects a depth image into a 3D point cloud.

    Inputs:
        depth: ...xHxW depth image (numpy array)
    Outputs:
        XYZ: ...xHxWx3 point cloud, where X is positive going right,
        Y is positive into the image, and Z is positive up in the image.
    """
    # Convert to a tensor before any torch arithmetic
    depth = torch.from_numpy(depth)
    # Intrinsic parameters copied from third_cam.intrinsic_matrix
    K = [[1.62252451, 0., 0.],
         [0., 2.16336601, 0.],
         [0., 0., 1.]]
    fx = K[0][0]
    fz = K[1][1]
    cx = K[0][2]
    cz = K[1][2]
    grid_x, grid_z = torch.meshgrid(torch.arange(depth.shape[-2]),
                                    torch.arange(depth.shape[-1]),
                                    indexing="ij")
    grid_x = grid_x.unsqueeze(0).expand(depth.shape)
    grid_z = grid_z.unsqueeze(0).expand(depth.shape)
    # Back-project each pixel using the pinhole model
    X_t = (grid_x - cx) * depth / fx
    Z_t = (grid_z - cz) * depth / fz
    XYZ = torch.stack((X_t, depth, Z_t), dim=len(depth.shape))
    return XYZ
# .....
obs_dict = third_cam.get_obs()[0]
depth_image = obs_dict["depth"]
rgb_image = obs_dict["rgb"]
pc = get_point_cloud(depth_image)
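For comparison, here is a minimal back-projection sketch using a conventional pixel-unit intrinsic matrix (focal lengths in pixels, principal point near the image center). The fx, fy, cx, cy values are assumed placeholders, not the values printed above, and the depth image is synthetic:

```python
import numpy as np

# Hypothetical pixel-unit intrinsics for a 640x480 image (assumed values).
fx, fy = 600.0, 600.0
cx, cy = 320.0, 240.0

H, W = 480, 640
depth = np.full((H, W), 2.0, dtype=np.float64)  # synthetic flat 2 m depth

# Pixel grid: u indexes columns (x direction), v indexes rows (y direction).
v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")

# Standard pinhole back-projection: p = depth * K^-1 * (u, v, 1)
X = (u - cx) * depth / fx
Y = (v - cy) * depth / fy
Z = depth
points = np.stack((X, Y, Z), axis=-1)  # HxWx3 point cloud
```

With pixel-unit intrinsics the pixel at the principal point back-projects to (0, 0, depth), which is a quick sanity check for whether the matrix is in the units the projection formula expects.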