
Vitis AI PyTorch quantized model #1484

Open
hanyinrui opened this issue Oct 16, 2024 · 0 comments

Comments

@hanyinrui
When loading the model and calling `quantizer = torch_quantizer('calib', model, (input_args))`, I get the following error:
```
KeyError                                  Traceback (most recent call last)
/tmp/ipykernel_222/3548169973.py in <module>
      3 test_data = test_data.unsqueeze(1)
      4 input_args = test_data[0:100]  # ensure it is a torch.Tensor
----> 5 quantizer = torch_quantizer('calib', model, (input_args))
      6 torch.save(quant_model, 'vit1.10_model.pth')

/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/apis.py in __init__(self, quant_mode, module, input_args, state_dict_file, output_dir, bitwidth, mix_bit, device, lstm, app_deploy, qat_proc, custom_quant_ops, quant_config_file)
     96                         lstm_app = lstm_app,
     97                         custom_quant_ops = custom_quant_ops,
---> 98                         quant_config_file = quant_config_file)
     99     # Finetune parameters,
    100     # After finetuning, run original forwarding code for calibration

/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/qproc/base.py in __init__(self, quant_mode, module, input_args, state_dict_file, output_dir, bitwidth_w, bitwidth_a, mix_bit, device, lstm_app, custom_quant_ops, quant_config_file)
    149         state_dict_file=state_dict_file,
    150         quant_mode=qmode,
--> 151         device=device)
    152
    153     # enable record outputs of per layer

/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/qproc/utils.py in prepare_quantizable_module(module, input_args, export_folder, state_dict_file, quant_mode, device)
    191   # parse origin module to graph
    192   NndctScreenLogger().info(f"=>Parsing {_get_module_name(module)}...")
--> 193   graph = parse_module(module, input_args)
    194   NndctScreenLogger().info(f"=>Quantizable module is generated.({export_file})")
    195   # recreate quantizable module from graph

/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/qproc/utils.py in parse_module(module, input_args, enable_opt, graph_name)
     81   parser = TorchParser()
     82   graph = parser(_get_module_name(module) if graph_name is None else graph_name,
---> 83                  module, input_args)
     84   if enable_opt:
     85     graph = quant_optimize(graph)

/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/parse/parser.py in __call__(self, graph_name, module, input_args)
     75     unknown_op_type_check(nndct_graph)
     76     self._convert_blob_tensor_type(nndct_graph)
---> 77     self._load_data(nndct_graph, module)
     78     if NndctOption.nndct_parse_debug.value >= 2:
     79       NndctDebugLogger.write(f"nndct raw graph:\n{nndct_graph}")

/opt/vitis_ai/conda/envs/vitis-ai-pytorch/lib/python3.7/site-packages/pytorch_nndct/parse/parser.py in _load_data(graph, module)
    344     else:
    345       for param_name, tensor in node.op.params.items():
--> 346         data = module.state_dict()[get_short_name(tensor.name)].cpu().numpy()
    347         tensor.from_ndarray(data)
    348         tensor = tensor_util.convert_parameter_tensor_format(

KeyError: '1504'
```
This error occurs with a ViT model, and I have already verified that the model's parameters and structure are consistent. What could be causing it?
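For reference, the failing line looks up each parsed tensor in `module.state_dict()` by a short name derived from the tensor's graph name. A minimal sketch of that mechanism (the `"::"` splitting and the numeric id `'1504'` here are assumptions for illustration, not the actual nndct naming scheme): if tracing the ViT assigns a numeric node id instead of the parameter's dotted path, the `state_dict` lookup raises exactly this kind of `KeyError`.

```python
# Hypothetical sketch of the lookup that fails in parser._load_data.
# Assumption: graph tensor names look like 'GraphName::dotted.param.path',
# and get_short_name keeps only the part after the separator.
def get_short_name_sketch(full_name: str) -> str:
    # Keep only the trailing component, which should be a state_dict key.
    return full_name.split("::")[-1]

# A state_dict is keyed by dotted parameter paths.
state_dict = {"blocks.0.attn.qkv.weight": "tensor-data"}

# Normal case: the short name matches a state_dict key.
good = get_short_name_sketch("VisionTransformer::blocks.0.attn.qkv.weight")
print(good in state_dict)   # lookup succeeds

# Failure case: tracing produced a bare numeric id, so the derived short
# name ('1504') is not a state_dict key and state_dict['1504'] would raise
# KeyError: '1504'.
bad = get_short_name_sketch("VisionTransformer::1504")
print(bad in state_dict)
```

This suggests checking whether the ViT contains ops the parser cannot map back to named parameters (custom attention blocks, functional calls on weights, etc.), since those are a common way for graph tensors to end up with synthetic numeric names.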
