Value error #17

Open
Zumbalamambo opened this issue Sep 5, 2020 · 11 comments
Comments

@Zumbalamambo

I tried the depth_estimation.ipynb notebook.

# warp by depth: relative transforms taking frame j into frames i and k
i_trans_j = i_pose_w @ torch.inverse(j_pose_w)
k_trans_j = k_pose_w @ torch.inverse(j_pose_w)
dst_trans_src = torch.cat([i_trans_j, k_trans_j], dim=0)  # batch of 2 transforms

# frame j is the source view
depth_src = depth_j
intrinsics_src = intrinsics_j

image_src = image_j
image_dst = torch.cat([image_i, image_k], dim=0)  # batch of 2 destination views

image_dst_to_src = kornia.warp_frame_depth(image_dst, depth_src, dst_trans_src, intrinsics_src)
print(image_dst_to_src.shape)

It throws the following error:

ValueError: Input batch size must be the same for both tensors or 1

@edgarriba (Member)

@Zumbalamambo what are the actual shapes of your inputs?

@Zumbalamambo (Author)

@edgarriba torch.Size([1, 3, 240, 320]) is the actual size.

I got the same error on my other computer too. I just cloned the repo and ran it without changing any parameters or files.

@Zumbalamambo (Author)

print(image_i.shape) -> torch.Size([1, 3, 240, 320])
print(image_j.shape) -> torch.Size([1, 3, 240, 320])
print(image_k.shape) -> torch.Size([1, 3, 240, 320])

print(image_dst.shape) -> torch.Size([2, 3, 240, 320])
print(depth_src.shape) -> torch.Size([1, 1, 240, 320])
print(dst_trans_src.shape) -> torch.Size([2, 4, 4])
print(intrinsics_src.shape) -> torch.Size([1, 3, 3])
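
For completeness, the mismatch reproduces standalone with random tensors of these shapes (only the kornia.warp_frame_depth call comes from the notebook; the rest is made up for illustration):

import torch
import kornia

# Random tensors with the shapes from the printout above.
image_dst = torch.rand(2, 3, 240, 320)              # B=2
depth_src = torch.rand(1, 1, 240, 320)              # B=1
dst_trans_src = torch.eye(4)[None].repeat(2, 1, 1)  # B=2
intrinsics_src = torch.eye(3)[None]                 # B=1

# warp_frame_depth lifts depth_src to 3D points (batch 1) and then applies
# the batch-2 transform; transform_points only broadcasts when the transform
# batch is 1, so this raises:
#   ValueError: Input batch size must be the same for both tensors or 1
kornia.warp_frame_depth(image_dst, depth_src, dst_trans_src, intrinsics_src)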

@edgarriba (Member)

@Zumbalamambo your batch size should be equal for every input tensor; I need to double-check that broadcasting works here.

@Zumbalamambo (Author)

@edgarriba I'm not sure what changes the batch size. I did not do anything different; it's a fresh clone.

@edgarriba (Member)

@Zumbalamambo What I mean is that your depth_src batch size should match image_dst's, and the same goes for intrinsics_src.
Try this: depth_src = depth_src.expand(image_dst.shape[0], -1, -1, -1)
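
Applied to the snippet from the opening comment, the suggestion looks like this (a sketch with the same variable names; .expand returns a broadcast view, so no memory is copied):

# Broadcast depth_src from batch 1 to image_dst's batch size (here 2).
depth_src = depth_src.expand(image_dst.shape[0], -1, -1, -1)  # (1,1,H,W) -> (2,1,H,W)

image_dst_to_src = kornia.warp_frame_depth(image_dst, depth_src, dst_trans_src, intrinsics_src)
print(image_dst_to_src.shape)  # expected: torch.Size([2, 3, 240, 320])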

@Zumbalamambo (Author) commented Sep 8, 2020

@edgarriba that works, but it throws the following error in the last cell:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-13-76c8d1a0dfb0> in <module>
     26     compute_scale_loss(image_dst_scale, image_src_scale, depth_src_pred,
     27                        dst_trans_src, intrinsics_src_scale, optimizer,
---> 28                        num_iterations, error_tol)
     29 
     30     print('Train iteration: {}/{}'.format(iter_idx, num_levels))

<ipython-input-12-14f8071ed76a> in compute_scale_loss(image_dst, image_src, depth_src, dst_trans_src, intrinsics_src, optimizer, num_iterations, error_tol)
     15 
     16             image_dst_to_src = kornia.warp_frame_depth(
---> 17                 image_dst, depth_src_tmp, dst_trans_src, intrinsics_src)
     18 
     19             ones = kornia.warp_frame_depth(torch.ones_like(image_dst),

~/anaconda3/envs/slam/lib/python3.6/site-packages/kornia/geometry/depth.py in warp_frame_depth(image_src, depth_dst, src_trans_dst, camera_matrix, normalize_points)
    163 
    164     # apply transformation to the 3d points
--> 165     points_3d_src = transform_points(src_trans_dst[:, None], points_3d_dst)  # BxHxWx3
    166 
    167     # project back to pixels

~/anaconda3/envs/slam/lib/python3.6/site-packages/kornia/geometry/linalg.py in transform_points(trans_01, points_1)
    203         raise TypeError("Tensor must be in the same device")
    204     if not trans_01.shape[0] == points_1.shape[0] and trans_01.shape[0] != 1:
--> 205         raise ValueError("Input batch size must be the same for both tensors or 1")
    206     if not trans_01.shape[-1] == (points_1.shape[-1] + 1):
    207         raise ValueError("Last input dimensions must differe by one unit")

ValueError: Input batch size must be the same for both tensors or 1

The error points to the batch-size check in transform_points (kornia/geometry/linalg.py), reached from inside warp_frame_depth.

@edgarriba (Member)

@Zumbalamambo you need to do the same for the intrinsics matrix. See: https://colab.research.google.com/drive/1DX4l_ssSEXSofOR3Hm9TcgRwFDWVEvJO?usp=sharing
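
Putting both expands together, the warp cell becomes something like this (a sketch with the thread's variable names; the linked Colab should be treated as the authoritative version):

# Broadcast every single-batch tensor to the destination batch size.
B = image_dst.shape[0]
depth_src = depth_src.expand(B, -1, -1, -1)        # (1,1,H,W) -> (B,1,H,W)
intrinsics_src = intrinsics_src.expand(B, -1, -1)  # (1,3,3)   -> (B,3,3)

image_dst_to_src = kornia.warp_frame_depth(image_dst, depth_src, dst_trans_src, intrinsics_src)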

@Zumbalamambo (Author)

@edgarriba unfortunately the Colab file is protected :(

@edgarriba (Member)

@Zumbalamambo try again. Let me know if it works and if we should update the original example.

@Zumbalamambo (Author)

@edgarriba that works, but the last cell of the notebook still throws the error I mentioned above.
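
Judging from the traceback above, the same broadcast presumably needs to happen inside compute_scale_loss as well, since the per-scale tensors are rebuilt there at batch size 1. An untested sketch using the names visible in the traceback (the .expand calls are the assumed fix, not confirmed in this thread):

# Hypothetical patch inside compute_scale_loss, before the warp calls.
B = image_dst.shape[0]
image_dst_to_src = kornia.warp_frame_depth(
    image_dst,
    depth_src_tmp.expand(B, -1, -1, -1),  # broadcast B=1 depth to B
    dst_trans_src,
    intrinsics_src.expand(B, -1, -1))     # broadcast B=1 intrinsics to B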
