Confusion about final loss #49

Open
rjtshrm opened this issue Jun 30, 2021 · 0 comments

Comments

rjtshrm commented Jun 30, 2021

@zmurez-ml First of all, thanks for this amazing work. I am training the network with some modifications; however, during training the network learns to output only 1's, because the loss is dominated by free space. To address this, you mention in the paper that the loss is only backpropagated through observed space (i.e., where the target TSDF is between -1 and 1), which is also what you used. However, I am confused: if that is the case and we mask the final output to target TSDF values between -1 and 1, how does the network learn free space, since no loss is computed there (i.e., no gradients flow)?
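For concreteness, here is a minimal NumPy sketch of the masking scheme as I understand it from the paper: the L1 loss is only computed over voxels whose ground-truth TSDF lies strictly inside the truncation band, so free-space and unobserved voxels contribute no gradient. The function name and `trunc` parameter are my own, not from the Atlas code.

```python
import numpy as np

def masked_tsdf_l1(pred, target, trunc=1.0):
    """L1 loss restricted to observed space (|target| < trunc).

    Voxels with |target| >= trunc (free space at +1, unobserved space)
    are masked out, so no gradient would flow through them. This is a
    hypothetical sketch, not the actual Atlas implementation.
    """
    mask = np.abs(target) < trunc
    if not mask.any():
        return 0.0  # nothing observed: no loss contribution
    return float(np.abs(pred[mask] - target[mask]).mean())

# Example: the middle voxel (target == 1.0, free space) is ignored,
# so only the two observed voxels contribute to the mean.
pred = np.array([0.5, 1.0, -0.2])
target = np.array([0.3, 1.0, -0.5])
loss = masked_tsdf_l1(pred, target)
```

With this masking, a prediction of all 1's would indeed receive zero penalty in free space, which is exactly the confusion raised above.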
