Hello,

May I ask for more details regarding 2D backbone pretraining? Quoted from the paper:
The 2D backbone for Human3.6M was pretrained on the COCO dataset [10] and finetuned jointly on MPII and Human3.6M for 10 epochs using the Adam optimizer with 10^−4 learning rate.
I have the following questions:
1. What loss function do you use for pretraining? (My current guess is a per-joint heatmap MSE; see the sketch after this list.)
2. The setting "for 10 epochs using the Adam optimizer with 10^−4 learning rate" seems to apply only to the joint fine-tuning on MPII and Human3.6M. What settings (loss, optimizer, learning rate, schedule) were used for the COCO pretraining stage?
3. Could you provide the pretraining code, or recommend a GitHub repository suitable for pretraining?
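To make question 1 concrete, here is the objective I currently assume for the 2D pretraining stage (a per-joint heatmap MSE, as in typical 2D pose baselines), plus my reading of the fine-tuning setup quoted above. This is only a sketch of my assumption; the function name, masking convention, and `backbone_2d` are my own and not taken from the paper or this repo:

```python
import torch

def heatmap_mse_loss(pred_heatmaps, gt_heatmaps, joint_visibility):
    """Per-joint MSE between predicted and ground-truth 2D heatmaps.

    pred_heatmaps, gt_heatmaps: (B, num_joints, H, W) float tensors
    joint_visibility:           (B, num_joints) float mask, 0 for unannotated joints
    """
    # Mask out joints that are not annotated/visible before averaging.
    mask = joint_visibility[:, :, None, None]
    diff = (pred_heatmaps - gt_heatmaps) ** 2 * mask
    return diff.sum() / mask.sum().clamp(min=1.0)

# My reading of the fine-tuning setup quoted from the paper:
# optimizer = torch.optim.Adam(backbone_2d.parameters(), lr=1e-4)
# then train jointly on MPII + Human3.6M for 10 epochs.
```

Please correct me if the pretraining loss or optimizer differs from this.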
I found that pretraining is very important. When I trained the volumetric model without loading pretrained weights for the 2D backbone, the relative MPJPE increased drastically from ~20 mm to ~30 mm. Thus, 2D backbone pretraining seems quite necessary.
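For completeness, this is roughly the weight-loading step I skipped in that experiment. The helper and the `model.backbone` attribute are my own naming and may differ from this repository's config-driven checkpoint loading:

```python
import torch

def load_pretrained_backbone(model, ckpt_path):
    """Copy pretrained 2D-backbone weights into the volumetric model.

    Hypothetical helper: `model.backbone` is assumed to be the 2D pose network.
    """
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Some checkpoints wrap the weights in a "state_dict" entry.
    state_dict = checkpoint.get("state_dict", checkpoint)
    # strict=False because the volumetric model has extra modules
    # that the 2D checkpoint does not contain.
    missing, unexpected = model.backbone.load_state_dict(state_dict, strict=False)
    print(f"missing keys: {len(missing)}, unexpected keys: {len(unexpected)}")
```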
Looking forward to hearing from you. Thank you!