Hey there, I have been trying to quantize and run an ONNX model using Vitis-AI 3.5. I have tried previous versions as well, but they all propose on-the-fly quantization.

Long story short: I quantized the ONNX model with `vai_q_onnx`, the quantization completed, and I got the quantized ONNX model. But how can I run this on my KV260 board? Is there a way to convert this ONNX model to an xmodel, or is running it with VitisAIExecutionProvider the only way?
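For reference, here is roughly what my quantization step looks like (a minimal sketch: the model paths, input name/shape, and calibration reader below are placeholders, not my exact code):

```python
# Rough sketch of the vai_q_onnx quantization step (Vitis-AI 3.5).
# Paths, input name/shape, and calibration data are placeholders.
import numpy as np
import vai_q_onnx
from onnxruntime.quantization import CalibrationDataReader, QuantType

class DummyCalibrationReader(CalibrationDataReader):
    """Feeds a few preprocessed samples for calibration (placeholder data)."""
    def __init__(self, input_name, num_samples=8):
        samples = [np.random.rand(1, 3, 224, 224).astype(np.float32)
                   for _ in range(num_samples)]
        self._iter = iter([{input_name: s} for s in samples])

    def get_next(self):
        return next(self._iter, None)

vai_q_onnx.quantize_static(
    model_input="model_fp32.onnx",                 # placeholder path
    model_output="model_int8.onnx",                # placeholder path
    calibration_data_reader=DummyCalibrationReader("input"),
    quant_format=vai_q_onnx.QuantFormat.QDQ,
    calibrate_method=vai_q_onnx.PowerOfTwoMethod.MinMSE,
    activation_type=QuantType.QInt8,
    weight_type=QuantType.QInt8,
    enable_dpu=True,  # quantize for DPU deployment
    extra_options={"ActivationSymmetric": True},
)
```

This runs to completion and produces the quantized QDQ model without errors.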
BTW, the execution provider route is causing a lot of pain and errors, mainly due to version incompatibilities.
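For completeness, this is the kind of session setup I am attempting on the board (a sketch only: the `config_file` path and input shape below are assumptions, not something I have verified on my image):

```python
# Sketch of running the quantized model through VitisAIExecutionProvider.
# The config_file path and the input shape are assumptions for illustration.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model_int8.onnx",                      # quantized model from vai_q_onnx
    providers=["VitisAIExecutionProvider"],
    provider_options=[{"config_file": "/usr/bin/vaip_config.json"}],
)

input_name = session.get_inputs()[0].name
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
outputs = session.run(None, {input_name: dummy})
print([o.shape for o in outputs])
```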
Any help regarding this is appreciated.
Regards