
Uint8 quantized model throws "struct.error" #120

Open
robinvanemden opened this issue Jun 13, 2020 · 4 comments
Contributor

robinvanemden commented Jun 13, 2020

Using WinMLTools to quantize a model from 32-bit floating point to 8-bit integers results in the following error when compiling with deepC:

Traceback (most recent call last):
  File "/usr/local/bin/onnx-cpp", line 11, in <module>
    load_entry_point('deepC==0.13', 'console_scripts', 'onnx-cpp')()
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 489, in main
    self.addParams(param);
  File "/usr/local/lib/python3.6/dist-packages/deepC/scripts/read_onnx.py", line 129, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data) ;
struct.error: unpack requires a buffer of 432 bytes

The traceback seems to indicate that deepC ought to be able to convert the model but encounters a minor issue - would you agree? Attached is the uint8-quantized ResNet CIFAR model we used to test 8-bit integer quantization.

model.zip
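For context, the `struct.error` above is the generic failure you get when the format string built from the tensor's element count does not match the length of `raw_data`. A minimal sketch of that mismatch (the variable names and the float32 assumption are illustrative, not taken from deepC's source):

```python
import struct

# 432 bytes of raw tensor data, e.g. 432 uint8-quantized weights.
raw_data = b"\x00" * 432
param_len = 432

# If the parser assumes float32 ("f", 4 bytes each), it asks struct for
# 432 * 4 = 1728 bytes and unpacking the 432-byte buffer fails:
try:
    struct.unpack("f" * param_len, raw_data)
except struct.error as e:
    print(e)  # unpack requires a buffer of 1728 bytes

# Choosing the pack format from the tensor's actual data type
# ("B" for uint8, 1 byte each) makes the sizes line up:
vals = struct.unpack("B" * param_len, raw_data)
print(len(vals))  # 432
```

This suggests the parser may be deriving `pack_format` from a data type the quantized initializers do not actually have.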

@github-actions

Thank you so much for filing the issue. We will look at it and take appropriate action as soon as possible.

@srohit0 srohit0 self-assigned this Jun 15, 2020
Member

srohit0 commented Jun 15, 2020

Hello @robinvanemden, this model is written with IR version 3, which is over two years old. deepC supports ONNX 1.5, which accepts IR version 4 and above.

% compile-onnx model.onnx
Model info:
  ir_vesion :  3 
  doc       : 
...
...
Traceback (most recent call last):
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2exe.py", line 98, in <module>
    sys.exit(main())
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2exe.py", line 87, in main
    (bundleDir, cppFile) = onnx2cpp.main();
  File "/home/aits/WORK/deepC/deepC/compiler/onnx2cpp.py", line 65, in main
    dcGraph = parser.main(onnx_file, bundle_dir, optimize=False, checker=False)
  File "/home/aits/WORK/deepC/deepC/compiler/read_onnx.py", line 493, in main
    dnnc_param = self.addParams(param, saveInput=saveInput)
  File "/home/aits/WORK/deepC/deepC/compiler/read_onnx.py", line 126, in addParams
    param_vals = struct.unpack(pack_format*param_len, param.raw_data) 
struct.error: unpack requires a buffer of 432 bytes

Do you have a newer version of this model? If not, please use the ONNX version converter and try again.

@robinvanemden
Contributor Author

Thanks for your fast response! I had actually converted the model down; I will try again with the higher IR version.

@robinvanemden
Contributor Author

My apologies for not following up sooner - attached is an updated version of the model, which seems to throw the same error.

model.zip
