Creating new "ground truth" for several datasets #19
Hey, thank you for your interest. The next step for us is to add CMU Panoptic dataset support. Then we can think about adding other multi-view datasets. We have some vague plans about annotating/reannotating datasets. Maybe the community can help us with it? 😊
If I want to train the model with the CMU Panoptic dataset, does that mean I should modify the dataset preparation code based on the Human3.6M processing pipeline? Can you add CMU Panoptic dataset support, and add more details about CMU Panoptic to your paper? Thanks a lot!
@dulibubai What exact details of CMU Panoptic dataset training are you interested in?
When I downloaded the CMU dataset from its official website, I found that most of the sequences have no labels, and only some do; also, most sequences contain many people, and only a few have a single person. Which sequences did you choose for training, and how did you split the train, val, and test sets? Thanks again.
@dulibubai Each scene contains multiple recorded persons => for each person a labeled frame interval is provided, in the format `sequence: [start_frame, end_frame]`:

```yaml
train:
  171026_pose3:
    - [1000, 3000]
  171026_pose2:
    - [1000, 7500]
    - [8000, 14000]
  171026_pose1:
    - [380, 7300]
    - [7900, 14500]
    - [15400, 22400]
  171204_pose4:
    - [500, 4300]
    - [4900, 8800]
    - [9400, 13200]
    - [14200, 17800]
    - [18700, 22500]
    - [23050, 27050]
    - [28000, 31600]
  171204_pose3:
    - [500, 4400]
    - [5400, 9000]
  171204_pose2:
    - [350, 4300]
    - [5000, 8800]
    - [9600, 13600]
    - [14300, 18500]
    - [19600, 23500]
    - [24200, 28200]
    - [28800, 32800]
    - [33500, 37700]
  171204_pose1:
    - [300, 4100]
    - [4800, 8900]
    - [10000, 13600]
    - [14000, 18200]
    - [18500, 22900]
    - [23500, 27600]
val:
  171204_pose5:
    - [400, 4300]
    - [5000, 8500]
    - [9500, 13400]
    - [14200, 18000]
    - [19000, 22600]
    - [23500, 27100]
  171204_pose6:
    - [1000, 4500]
    - [5150, 9100]
    - [9830, 13800]
    - [14370, 18300]
    - [19000, 22900]
```
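A minimal sketch of how such a split could be represented and queried in Python, assuming intervals are inclusive frame ranges. The dict below mirrors (an abridged portion of) the split listed above; the helper name `frame_in_split` is illustrative, not from the repo.

```python
# Abridged mirror of the train/val split above; sequence name -> labeled
# (start_frame, end_frame) intervals, assumed inclusive.
SPLIT = {
    "train": {
        "171026_pose3": [(1000, 3000)],
        "171026_pose2": [(1000, 7500), (8000, 14000)],
        # ... remaining train sequences from the list above
    },
    "val": {
        "171204_pose5": [(400, 4300), (5000, 8500), (9500, 13400),
                         (14200, 18000), (19000, 22600), (23500, 27100)],
        # ... remaining val sequences from the list above
    },
}

def frame_in_split(split, subset, sequence, frame_idx):
    """Return True if frame_idx lies in any labeled interval of the sequence."""
    intervals = split.get(subset, {}).get(sequence, [])
    return any(lo <= frame_idx <= hi for lo, hi in intervals)

# Frame 7800 of 171026_pose2 falls in the gap between the two intervals:
print(frame_in_split(SPLIT, "train", "171026_pose2", 7800))  # → False
```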
Thanks sincerely for sharing! As shown in the CMU dataset, every sequence has 31 cameras; how did you split the 31 camera views between the train and val sets? Thanks again.
@dulibubai |
Yeah! Thanks a lot! |
Hi! I have another question: would it be convenient to provide the 2D bbox label files for every camera image of CMU, as extracted by your object detection net?
I've uploaded our Mask R-CNN detections to Google Drive. Each detection is a tuple (left, upper, right, lower, confidence).
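A small sketch of working with detections in that (left, upper, right, lower, confidence) format, assuming pixel coordinates; the helper name `filter_and_convert` and the confidence threshold are illustrative, not from the repo.

```python
# Assumed detection format, per the comment above:
# (left, upper, right, lower, confidence)

def filter_and_convert(detections, min_conf=0.9):
    """Keep confident detections and convert to (x, y, width, height) boxes."""
    boxes = []
    for left, upper, right, lower, conf in detections:
        if conf >= min_conf:
            boxes.append((left, upper, right - left, lower - upper))
    return boxes

dets = [(100.0, 50.0, 300.0, 450.0, 0.98),
        (10.0, 10.0, 40.0, 60.0, 0.30)]  # second box is low-confidence
print(filter_and_convert(dets))  # → [(100.0, 50.0, 200.0, 400.0)]
```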
Thanks a lot! |
@karfly , |
|
@shrubb Thanks a lot!
@dulibubai |
@shrubb When training the model with the CMU dataset, how should I set the following parameters?
We used the same parameters as for Human3.6M. The paper's experiments were done with a single GPU, but you can use multiple GPUs to reduce training time.
@karfly |
@dulibubai |
@karfly |
@dulibubai |
@karfly |
Hi @karfly, what about the cameras for training? Did you just use all the other cameras? Also, in my own attempts to test/train (#75, #77) I found that the projection matrix data for cameras 25 and 29 were off.
Hi, @Samleo8! |
Hello. Could you re-upload those files to Google Drive?
Hi, thanks for this amazing work.
Do you have any plans for running your method on other datasets and releasing the resulting poses? This would be very beneficial for correcting many ground-truth errors. Specifically, I'm thinking of MPI-INF-3DHP (some annotations are wrong) and HumanEva-I (in some sequences the head ground truth is wrong), in addition to the already mentioned Human3.6M (problems with S9) and CMU-Panoptic (ground truth unavailable for many sequences, e.g. dance, plus errors).
I think your results could be better than the original ground truth in many cases. So by, e.g., leave-one-subject-out training and testing, one could generate new, polished "ground truth" for each subject of a particular dataset (avoiding memorization of training-set errors).
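The leave-one-subject-out idea above could be sketched as follows; subject names are illustrative (Human3.6M-style), and this only enumerates the splits, not the training itself.

```python
def leave_one_out_splits(subjects):
    """Yield (train_subjects, held_out_subject) pairs, one per subject.

    The model trained on `train_subjects` would then predict polished
    "ground truth" for `held_out_subject`, so its own labels are never
    seen during training.
    """
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield train, held_out

# Illustrative subject list (Human3.6M naming):
subjects = ["S1", "S5", "S6", "S7", "S8", "S9", "S11"]
for train, held_out in leave_one_out_splits(subjects):
    print(held_out, "<-", train)
```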