[ONNX][IMPORT] Add int8 support in onnx_importer
Signed-off-by: Gaurav Shukla <[email protected]>
Shukla-Gaurav authored and vinayakdsci committed Sep 10, 2024
1 parent b375cd0 commit 95e18f5
Showing 1 changed file with 8 additions and 0 deletions.
@@ -387,6 +387,14 @@ ContextCache::ConvertTensorProtoToAttr(const onnx::TensorProto &tp) {
   case onnx::TensorProto::DataType::TensorProto_DataType_FLOAT:
     return mlirDenseElementsAttrFloatGet(tensor_type, tp.float_data_size(),
                                          tp.float_data().data());
+  case onnx::TensorProto::DataType::TensorProto_DataType_INT8: {
+    std::vector<int8_t> int8_conversion;
+    int8_conversion.reserve(tp.int32_data_size());
+    for (int32_t v : tp.int32_data())
+      int8_conversion.push_back(v);
+    return mlirDenseElementsAttrInt8Get(
+        tensor_type, int8_conversion.size(), int8_conversion.data());
+  }
   case onnx::TensorProto::DataType::TensorProto_DataType_INT32:
     return mlirDenseElementsAttrInt32Get(tensor_type, tp.int32_data_size(),
                                          tp.int32_data().data());
