The converted model does not perform well #307
Comments
@yakupakkaya Do you know the patch_d2_meta_arch() function in this code?
@yakupakkaya Could you try exporting the model in fp32 first? I.e. change "torchscript_int8" to "torchscript"; this model should produce exactly the same results as the original PyTorch model, which can help identify whether the issue is in the export pipeline or in quantization. If the issue is in (post-training) quantization, note that it relies on data for calibration, so please check the dataset and maybe try increasing …
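The fp32-first check suggested above can be sketched as below. This is a hedged sketch, not tutorial code: the `d2go.export.api.convert_and_export_predictor` import path and its argument order follow older versions of the d2go beginner tutorial and may differ in your installed version, so treat them as assumptions.

```python
# Sketch of the fp32-first debugging step. The d2go import path and the
# convert_and_export_predictor signature are assumptions based on older
# beginner-tutorial code; adjust them to your installed d2go version.

def export_predictor(cfg, model, data_loader, fmt="torchscript", out_dir="./export"):
    """Export with fmt="torchscript" (fp32) first: that model should match
    the original PyTorch model exactly. If the fp32 export is fine but
    "torchscript_int8" is not, the problem is in post-training quantization
    (e.g. the calibration data), not in the export pipeline."""
    # Imported lazily so this sketch stays runnable without d2go installed.
    from d2go.export.api import convert_and_export_predictor
    return convert_and_export_predictor(cfg, model, fmt, out_dir, data_loader)

# Debugging order: verify fp32 parity first, only then debug int8.
EXPORT_ORDER = ["torchscript", "torchscript_int8"]
```

If the fp32 export already disagrees with the original model, quantization can be ruled out and the pipeline itself (tracing, preprocessing, meta-arch patching) is the place to look.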
Hi. I am facing a similar issue. D2Go does not work as expected when trained on the balloon dataset from the beginner tutorial. The problem occurs before the model is exported, after 600 training iterations. The output log on my computer:
Tested on:
Unexpected vs. expected output: the results from an older Jupyter notebook, https://github.com/TannerGilbert/Object-Detection-and-Image-Segmentation-with-Detectron2/blob/592960ddc4243ff34af89a38124452a75309aa1c/D2Go/D2GO_Introduction.ipynb, are good, while the newer one, https://github.com/TannerGilbert/Object-Detection-and-Image-Segmentation-with-Detectron2/blob/master/D2Go/D2GO_Introduction.ipynb, does not work well. Here are some sample outputs based on the intro tutorial from @TannerGilbert's code, showing the unexpected output and then the expected output:
Hello, please tell me how to quantize a model trained on custom data into int8. I have been failing.
Hi there,
I couldn't find a way to quantize the model successfully, so I tried YOLO models instead; they perform better when optimized.
Best,
Yakup
I trained a model with the following configs, as in the demo code, and converted the trained model to an int8 model.
The inference results with the converted model are not even close to the original model's: it produces only a few detections above a 50% confidence score, and they are irrelevant.