Predict pose using my images #120
Hey, @agenthong!
Thanks for replying.
You can find an example here.
Hi @agenthong @karfly, I also use my own data to predict the 3D pose, like this:
Yeah, but I want to get the 3D joints; this projects the tensor onto the 2D images.
@agenthong |
@chaisheng-dawnlight |
Thanks a lot! So this means that the tensor contains the 3D joints in the world coordinate system?
Hi @karfly, thanks for your reply. I use the scatter function to draw the 3D pose, but it still doesn't work. This is my visualization code:
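For anyone hitting the same visualization issue: a common cause is matplotlib's per-axis autoscaling, which squashes the pose. Below is a minimal sketch of plotting predicted keypoints with a 3D scatter; the `(17, 3)` joint layout and the function names are assumptions for illustration, not code from this repo.

```python
# Sketch: plot (num_joints, 3) keypoints with matplotlib's 3D scatter,
# forcing equal axis ranges so the skeleton isn't distorted.
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so this runs without a display
import matplotlib.pyplot as plt


def equal_axis_limits(keypoints_3d):
    """Cubic bounding box around the pose, to defeat matplotlib's
    default per-axis autoscaling."""
    center = keypoints_3d.mean(axis=0)
    half = (keypoints_3d.max(axis=0) - keypoints_3d.min(axis=0)).max() / 2
    return [(c - half, c + half) for c in center]


def plot_pose(keypoints_3d, out_path="pose_3d.png"):
    fig = plt.figure()
    ax = fig.add_subplot(111, projection="3d")
    ax.scatter(keypoints_3d[:, 0], keypoints_3d[:, 1], keypoints_3d[:, 2],
               c="red", marker="o")
    (x0, x1), (y0, y1), (z0, z1) = equal_axis_limits(keypoints_3d)
    ax.set_xlim(x0, x1); ax.set_ylim(y0, y1); ax.set_zlim(z0, z1)
    fig.savefig(out_path)
    plt.close(fig)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    plot_pose(rng.normal(scale=200.0, size=(17, 3)))  # 17 random "joints"
```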
@agenthong |
@chaisheng-dawnlight |
I have GT 3D keypoints like this: |
Hi, I'm also experiencing a similar problem. I followed the comment as @karfly said.
The ground-truth points look like this, and the pretrained model's prediction looks like this: I've plotted the 3D points as you said, and it looks like this: I currently have no idea what error caused this result. I used 4 views of a single pose, with the corresponding camera parameters. Here's the code showing how I got the predicted 3D points.
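When the predicted pose looks this far off, one quick sanity check is to reproject the predicted 3D points back into each of the 4 views using that view's calibration and overlay them on the detected 2D keypoints; if the reprojections don't land on the person, the camera parameters (or their convention) are likely wrong. A sketch assuming a pinhole model with intrinsics `K` and extrinsics `R`, `t` (the names are hypothetical, not from the thread's code):

```python
import numpy as np


def project_to_view(K, R, t, points_world):
    """Project (N, 3) world-frame points to (N, 2) pixel coordinates
    with the pinhole model: x ~ K (R X + t)."""
    P = K @ np.hstack([R, t.reshape(3, 1)])            # 3x4 projection matrix
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])
    proj = (P @ pts_h.T).T                             # homogeneous pixel coords
    return proj[:, :2] / proj[:, 2:3]                  # divide by depth
```

Running this for each of the 4 views and plotting the result over the input images usually pinpoints which camera's calibration is inconsistent.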
|
Hi, @karfly
Thanks for sharing this great repo.
I've trained the model using the Human3.6M dataset. After that, I take 2D heatmaps of other images, unproject them with my own calibration, and feed them to the trained model. But I get a result like this:
Is this the 3D pose?
And I think this result may be in a different coordinate system. If so, how can I get the corresponding poses in my images' coordinate system?
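If the network's output is indeed in the world frame, mapping it into a particular camera's frame is just the rigid extrinsic transform. A minimal sketch, assuming extrinsics stored as `R` (3x3) and `t` (3,) under the common convention `X_cam = R @ X_world + t`; check your calibration format, since some tools store the inverse transform instead:

```python
import numpy as np


def world_to_camera(R, t, points_world):
    """Transform (N, 3) world-frame points into the camera frame:
    X_cam = R @ X_world + t."""
    return points_world @ R.T + t.reshape(1, 3)


def camera_to_world(R, t, points_cam):
    """Inverse transform: X_world = R^T (X_cam - t).
    Useful to verify a round trip."""
    return (points_cam - t.reshape(1, 3)) @ R
```

A round trip (`camera_to_world(R, t, world_to_camera(R, t, X))`) should return the input, which is a cheap way to confirm the convention matches your calibration files.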