Shape mismatch when trying to use different strides in decoder #498
Comments
You can take a look at the related code:
Thanks, I had seen that before, and that's how I figured I might be able to change the extra stride.
Sorry, there are no relevant documents available, but feel free to ask questions whenever you have any doubts.
Thanks a lot. I have a question about the object queries. If I'm not mistaken, you mentioned the queries are optimized during the training process, and by looking at the code I figured this optimization is indirect and based on the image features, since there is no computation involving the queries in the matcher or the loss function.
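The indirect optimization described above can be seen in a toy sketch (plain PyTorch, not RT-DETR's actual modules, and all names here are illustrative): the queries are a learned `nn.Embedding`, and gradients reach them only through the decoder outputs that the loss consumes, never through the matcher directly.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming a DETR-style decoder: object queries are a
# learned embedding that attends to image features; the loss is computed
# on the decoder outputs, so the queries are updated only via backprop.
class TinyDecoder(nn.Module):
    def __init__(self, num_queries=10, dim=32):
        super().__init__()
        self.queries = nn.Embedding(num_queries, dim)  # learned object queries
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.head = nn.Linear(dim, 4)  # e.g. a box-regression head

    def forward(self, memory):  # memory: (B, HW, dim) flattened image features
        q = self.queries.weight.unsqueeze(0).expand(memory.size(0), -1, -1)
        out, _ = self.attn(q, memory, memory)  # queries read image features
        return self.head(out)

dec = TinyDecoder()
feats = torch.randn(2, 50, 32)      # dummy image features
loss = dec(feats).sum()             # stand-in for the real detection loss
loss.backward()
print(dec.queries.weight.grad is not None)  # True: queries receive gradients
```

So even though the matcher and loss never touch the query tensor directly, the queries still get gradient signal through the attention path.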
Hi, I tried to train RT-DETR at a higher resolution (1280x1280) using the default settings, and the results were disappointing. According to #187, you mentioned we can take a deeper look into the features by configuring `num_layers` in the decoder. I did that, and training is fine so far, but there is a problem. When I try to change `feat_strides` by adding a 64 to it (`[8, 16, 32, 64]`), I get a `RuntimeError: The size of tensor a (34000) must match the size of tensor b (35200) at non-singleton dimension 1` during inference (evaluation). But when I set `feat_strides` to `[8, 16, 32, 32]`, it works fine. How is that? Shouldn't the shapes be consistent?
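The two numbers in the error are consistent with the extra feature level not actually being downsampled: at 1280x1280, anchors generated from `feat_strides = [8, 16, 32, 64]` total 34000 positions, while four feature maps whose last two levels are both at stride 32 total 35200 tokens. A quick sketch of the arithmetic (illustrative only, not RT-DETR code):

```python
# Token count per feature level for a square input: (img_size // stride) ** 2.
# Hypothetical helper to show where 34000 and 35200 come from.
def total_tokens(img_size, strides):
    return sum((img_size // s) ** 2 for s in strides)

img = 1280
anchors = total_tokens(img, [8, 16, 32, 64])   # what feat_strides declares
features = total_tokens(img, [8, 16, 32, 32])  # what the model actually emits
print(anchors, features)  # 34000 35200
```

This would explain why `[8, 16, 32, 32]` works: the fourth level keeps the 40x40 spatial size of the stride-32 map, so declaring it as stride 32 makes the anchor count match the flattened feature length.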