
How to obtain bbox file? #26

Closed
ericzw opened this issue Nov 7, 2019 · 10 comments

@ericzw

ericzw commented Nov 7, 2019

Hi, could you tell me how to generate bounding box file?
Or, could you provide the bounding box files you used?

@shrubb
Collaborator

shrubb commented Nov 7, 2019

Hi @qiuzhongwei-USTB,

for Human3.6M, follow the instructions here for ground truth boxes, or use these Mask R-CNN boxes;

for CMU Panoptic, the bounding boxes are not yet ready, and neither is the rest of the pipeline; sorry about that.

@ericzw
Author

ericzw commented Nov 8, 2019

Thanks!
Could you tell me what the difference is between using undistorted images and the original images?

@shrubb
Collaborator

shrubb commented Nov 8, 2019

Please read about camera models, for example, here. With original images, the ground truth 3D points will project to slightly inaccurate locations in images (still may be OK, but the quality will be lower). Undistortion corrects this.
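To make the point above concrete, here is a minimal sketch (assuming the usual polynomial radial-distortion model with hypothetical coefficients `k1`, `k2`, not the actual Human3.6M camera parameters) of how lens distortion shifts a projected point away from its ideal pinhole location on the raw image:

```python
import numpy as np

# Ideal (pinhole) normalized image coordinates of a ground-truth 3D point.
x, y = 0.3, -0.2

# Hypothetical radial distortion coefficients (k1, k2), as in the standard
# polynomial model: x_d = x * (1 + k1*r^2 + k2*r^4).
k1, k2 = -0.2, 0.05
r2 = x * x + y * y
scale = 1.0 + k1 * r2 + k2 * r2 * r2
x_d, y_d = x * scale, y * scale

# Convert to pixels with an assumed focal length to see the error magnitude.
focal = 1000.0  # pixels (assumed)
err_px = focal * np.hypot(x_d - x, y_d - y)
print(f"projection error on the raw image: {err_px:.1f} px")
```

With these made-up coefficients the ground-truth projection lands several pixels away from where it appears in the raw image; undistorting the images (or the keypoints) removes exactly this offset.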

@ericzw
Author

ericzw commented Nov 8, 2019

Thanks for your reply! I've run into some more problems.

  1. Evaluating your pre-trained model is very slow (about 70 s/img on one V100 GPU).
    Could you provide your training and testing times on the Human3.6M dataset?

  2. There is an error when I set `retain_every_n_frames_in_test` to 5 or 64.

error information:
Experiment name: [email protected]:24:58
5it [06:07, 73.45s/it]
/root/v-helzhe/zw/h36m/data/learnable-triangulation-pytorch/mvn/datasets/human36m.py:222: RuntimeWarning: invalid value encountered in true_divide
action_scores[k] = v['total_loss'] / v['frame_count']
/root/v-helzhe/zw/h36m/data/learnable-triangulation-pytorch/mvn/datasets/human36m.py:222: RuntimeWarning: invalid value encountered in double_scalars
action_scores[k] = v['total_loss'] / v['frame_count']
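The warning can be reproduced in isolation: if no frames of some action survive the subsampling, `frame_count` is zero and the NumPy-scalar division produces NaN with a RuntimeWarning rather than raising (a minimal reproduction sketch, a guess at the mechanism, not the repo's code):

```python
import warnings

import numpy as np

total_loss = np.float64(0.0)
frame_count = np.float64(0.0)  # no frames of this action survived subsampling

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # 0/0 with NumPy scalars -> nan plus an "invalid value" RuntimeWarning,
    # matching the "invalid value encountered in double_scalars" message above.
    score = total_loss / frame_count

print(np.isnan(score))  # the score silently becomes NaN
print(any(issubclass(w.category, RuntimeWarning) for w in caught))
```

This is why the NaN only shows up in the aggregated per-action scores instead of crashing the evaluation loop.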

@shrubb
Collaborator

shrubb commented Nov 8, 2019

Not sure. Let us look into this after the weekend.

@ericzw ericzw closed this as completed Nov 10, 2019
@shrubb
Collaborator

shrubb commented Nov 10, 2019

@qiuzhongwei-USTB did you close the issue because you worked around those problems? If yes, how did you solve them?

@ericzw
Author

ericzw commented Nov 11, 2019

> @qiuzhongwei-USTB did you close the issue because you worked around those problems? If yes, how did you solve them?

Yes, `v['frame_count']` can be zero. I changed the code as follows:

```python
for k, v in action_scores.items():
    if v['frame_count'] > 0:
        action_scores[k] = v['total_loss'] / v['frame_count']
    else:
        action_scores[k] = 0
```

@shrubb
Collaborator

shrubb commented Nov 11, 2019

What about the long evaluation time?

@ericzw
Author

ericzw commented Nov 11, 2019

I didn't solve it, but the evaluation time is bearable when I use 8 GPUs with distributed testing.
Could you share your evaluation time?

@karfly
Owner

karfly commented Nov 11, 2019

Hi @qiuzhongwei-USTB!
It's very strange that you get such long evaluation times. I've just checked our evaluation times on the full Human3.6M validation dataset:

  • Algebraic: 1 hour 29 minutes
  • Volumetric: 2 hours 1 minute

We use a single Nvidia Tesla P40 GPU for evaluation.

Could you provide the full bash command you used for evaluation?
