Hello, I have a problem when I try to run the inference code with the pre-trained model; I get the following error:
【Solver】 ********* [load] ***********
01/28 07:21:45 PM (Elapsed: 00:00:03) loading the model from /content/MediumVC/Any2Any/model/checkpoint-3000.pt
Traceback (most recent call last):
  File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/content/MediumVC/Any2Any/infer/infer.py", line 95, in <module>
    solver = Solver(config)
  File "/content/MediumVC/Any2Any/infer/infer.py", line 28, in __init__
    self.resume_model(self.config['resume_path'])
  File "/content/MediumVC/Any2Any/infer/infer.py", line 56, in resume_model
    self.Generator.load_state_dict(checkpoint['Generator'])
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MagicModel:
Missing key(s) in state_dict: "any2one.encoder.pre_block.0.conv_block1.conv_block.conv0.bias", "any2one.encoder.pre_block.0.conv_block1.conv_block.conv0.weight", "any2one.encoder.pre_block.0.conv_block2.conv_block.conv0.bias", "any2one.encoder.pre_block.0.conv_block2.conv_block.conv0.weight", "any2one.encoder.pre_block.0.adjust_dim_layer.bias", "any2one.encoder.pre_block.0.adjust_dim_layer.weight", "any2one.encoder.pre_block.1.conv_block1.conv_block.conv0.bias", "any2one.encoder.pre_block.1.conv_block1.conv_block.conv0.weight", "any2one.encoder.pre_block.1.conv_block2.conv_block.conv0.bias", "any2one.encoder.pre_block.1.conv_block2.conv_block.conv0.weight", "any2one.encoder.pre_block.1.adjust_dim_layer.bias", "any2one.encoder.pre_block.1.adjust_dim_layer.weight", "any2one.encoder.pre_block.2.conv_block1.conv_block.conv0.bias", "any2one.encoder.pre_block.2.conv_block1.conv_block.conv0.weight", "any2one.encoder.pre_block.2.conv_block2.conv_block.conv0.bias", "any2one.encoder.pre_block.2.conv_block2.conv_block.conv0.weight", "any2one.encoder.pre_block.2.adjust_dim_layer.bias", "any2one.encoder.pre_block.2.adjust_dim_layer.weight", "any2one.encoder.post_block.0.cross_attn.in_proj_weight", "any2one.encoder.post_block.0.cross_attn.in_proj_bias", "any2one.encoder.post_block.0.cross_attn.out_proj.weight", "any2one.encoder.post_block.0.cross_attn.out_proj.bias", "any2one.decoder.pre_conv_block.0.conv_block1.conv_block.conv0.bias", "any2one.decoder.pre_conv_block.0.conv_block1.conv_block.conv0.weight", "any2one.decoder.pre_conv_block.0.conv_block2.conv_block.conv0.bias", "any2one.decoder.pre_conv_block.0.conv_block2.conv_block.conv0.weight", "any2one.decoder.pre_conv_block.0.adjust_dim_layer.bias", "any2one.decoder.pre_conv_block.0.adjust_dim_layer.weight", "any2one.decoder.pre_attention_block.0.cross_attn.in_proj_weight", "any2one.decoder.pre_attention_block.0.cross_attn.in_proj_bias", "any2one.decoder.pre_attention_block.0.cross_attn.out_proj.weight", "any2one.decoder.pre_attention_block.0.cross_attn.out_proj.bias", "any2one.decoder.mel_linear1.weight", "any2one.decoder.mel_linear1.bias", "any2one.decoder.mel_linear2.weight", "any2one.decoder.mel_linear2.bias", "any2one.decoder.smoothers.0.self_attn.in_proj_weight", "any2one.decoder.smoothers.0.self_attn.in_proj_bias", "any2one.decoder.smoothers.0.self_attn.out_proj.weight", "any2one.decoder.smoothers.0.self_attn.out_proj.bias", "any2one.decoder.smoothers.0.conv0.bias", "any2one.decoder.smoothers.0.conv0.weight", "any2one.decoder.smoothers.0.conv1.bias", "any2one.decoder.smoothers.0.conv1.weight",
"any2one.decoder.smoothers.1.self_attn.in_proj_weight", "any2one.decoder.smoothers.1.self_attn.in_proj_bias", "any2one.decoder.smoothers.1.self_attn.out_proj.weight", "any2one.decoder.smoothers.1.self_attn.out_proj.bias", "any2one.decoder.smoothers.1.conv0.bias", "any2one.decoder.smoothers.1.conv0.weight", "any2one.decoder.smoothers.1.conv1.bias", "any2one.decoder.smoothers.1.conv1.weight", "any2one.decoder.smoothers.2.self_attn.in_proj_weight", "any2one.decoder.smoothers.2.self_attn.in_proj_bias", "any2one.decoder.smoothers.2.self_attn.out_proj.weight", "any2one.decoder.smoothers.2.self_attn.out_proj.bias", "any2one.decoder.smoothers.2.conv0.bias", "any2one.decoder.smoothers.2.conv0.weight", "any2one.decoder.smoothers.2.conv1.bias", "any2one.decoder.smoothers.2.conv1.weight", "any2one.decoder.post_block.0.conv_block1.conv_block.conv0.bias", "any2one.decoder.post_block.0.conv_block1.conv_block.conv0.weight", "any2one.decoder.post_block.0.conv_block2.conv_block.conv0.bias", "any2one.decoder.post_block.0.conv_block2.conv_block.conv0.weight", "any2one.decoder.post_block.0.adjust_dim_layer.bias", "any2one.decoder.post_block.0.adjust_dim_layer.weight", "any2one.decoder.post_block.1.conv_block1.conv_block.conv0.bias", "any2one.decoder.post_block.1.conv_block1.conv_block.conv0.weight", "any2one.decoder.post_block.1.conv_block2.conv_block.conv0.bias", "any2one.decoder.post_block.1.conv_block2.conv_block.conv0.weight", "any2one.decoder.post_block.1.adjust_dim_layer.bias", "any2one.decoder.post_block.1.adjust_dim_layer.weight", "any2one.decoder.post_block.2.conv_block1.conv_block.conv0.bias", "any2one.decoder.post_block.2.conv_block1.conv_block.conv0.weight", "any2one.decoder.post_block.2.conv_block2.conv_block.conv0.bias", "any2one.decoder.post_block.2.conv_block2.conv_block.conv0.weight", "any2one.decoder.post_block.2.adjust_dim_layer.bias", "any2one.decoder.post_block.2.adjust_dim_layer.weight", "any2one.decoder.post_block.3.conv_block1.conv_block.conv0.bias", "any2one.decoder.post_block.3.conv_block1.conv_block.conv0.weight", "any2one.decoder.post_block.3.conv_block2.conv_block.conv0.bias", "any2one.decoder.post_block.3.conv_block2.conv_block.conv0.weight", "any2one.decoder.post_block.3.adjust_dim_layer.bias", "any2one.decoder.post_block.3.adjust_dim_layer.weight", "cont_encoder.conv_block0.0.conv_block1.conv_block.conv0.bias", "cont_encoder.conv_block0.0.conv_block1.conv_block.conv0.weight_g", "cont_encoder.conv_block0.0.conv_block1.conv_block.conv0.weight_v", "cont_encoder.conv_block0.0.conv_block2.conv_block.conv0.bias", "cont_encoder.conv_block0.0.conv_block2.conv_block.conv0.weight_g", "cont_encoder.conv_block0.0.conv_block2.conv_block.conv0.weight_v", "cont_encoder.conv_block0.0.adjust_dim_layer.bias", "cont_encoder.conv_block0.0.adjust_dim_layer.weight_g", "cont_encoder.conv_block0.0.adjust_dim_layer.weight_v", "cont_encoder.conv_block0.1.conv_block1.conv_block.conv0.bias", "cont_encoder.conv_block0.1.conv_block1.conv_block.conv0.weight_g", "cont_encoder.conv_block0.1.conv_block1.conv_block.conv0.weight_v", "cont_encoder.conv_block0.1.conv_block2.conv_block.conv0.bias", "cont_encoder.conv_block0.1.conv_block2.conv_block.conv0.weight_g", "cont_encoder.conv_block0.1.conv_block2.conv_block.conv0.weight_v", "cont_encoder.conv_block0.1.adjust_dim_layer.bias", "cont_encoder.conv_block0.1.adjust_dim_layer.weight_g", "cont_encoder.conv_block0.1.adjust_dim_layer.weight_v", "cont_encoder.attention_norm0.0.cross_attn.in_proj_weight", "cont_encoder.attention_norm0.0.cross_attn.in_proj_bias", 
"cont_encoder.attention_norm0.0.cross_attn.out_proj.weight", "cont_encoder.attention_norm0.0.cross_attn.out_proj.bias", "cont_encoder.conv_block1.0.conv_block1.conv_block.conv0.bias", "cont_encoder.conv_block1.0.conv_block1.conv_block.conv0.weight_g", "cont_encoder.conv_block1.0.conv_block1.conv_block.conv0.weight_v", "cont_encoder.conv_block1.0.conv_block2.conv_block.conv0.bias", "cont_encoder.conv_block1.0.conv_block2.conv_block.conv0.weight_g", "cont_encoder.conv_block1.0.conv_block2.conv_block.conv0.weight_v", "cont_encoder.conv_block1.0.adjust_dim_layer.bias", "cont_encoder.conv_block1.0.adjust_dim_layer.weight_g", "cont_encoder.conv_block1.0.adjust_dim_layer.weight_v", "cont_encoder.conv_block1.1.conv_block1.conv_block.conv0.bias", "cont_encoder.conv_block1.1.conv_block1.conv_block.conv0.weight_g", "cont_encoder.conv_block1.1.conv_block1.conv_block.conv0.weight_v", "cont_encoder.conv_block1.1.conv_block2.conv_block.conv0.bias", "cont_encoder.conv_block1.1.conv_block2.conv_block.conv0.weight_g", "cont_encoder.conv_block1.1.conv_block2.conv_block.conv0.weight_v", "cont_encoder.conv_block1.1.adjust_dim_layer.bias", "cont_encoder.conv_block1.1.adjust_dim_layer.weight_g", "cont_encoder.conv_block1.1.adjust_dim_layer.weight_v", "cont_encoder.attention_norm1.0.cross_attn.in_proj_weight", "cont_encoder.attention_norm1.0.cross_attn.in_proj_bias", "cont_encoder.attention_norm1.0.cross_attn.out_proj.weight", "cont_encoder.attention_norm1.0.cross_attn.out_proj.bias", "generator.pre_block0.0.conv_block1.conv_block.conv0.bias", "generator.pre_block0.0.conv_block1.conv_block.conv0.weight_g", "generator.pre_block0.0.conv_block1.conv_block.conv0.weight_v", "generator.pre_block0.0.conv_block2.conv_block.conv0.bias", "generator.pre_block0.0.conv_block2.conv_block.conv0.weight_g", "generator.pre_block0.0.conv_block2.conv_block.conv0.weight_v", "generator.pre_block0.0.adjust_dim_layer.bias", "generator.pre_block0.0.adjust_dim_layer.weight_g", "generator.pre_block0.0.adjust_dim_layer.weight_v", "generator.pre_block0.1.conv_block1.conv_block.conv0.bias", "generator.pre_block0.1.conv_block1.conv_block.conv0.weight_g", "generator.pre_block0.1.conv_block1.conv_block.conv0.weight_v", "generator.pre_block0.1.conv_block2.conv_block.conv0.bias", "generator.pre_block0.1.conv_block2.conv_block.conv0.weight_g", "generator.pre_block0.1.conv_block2.conv_block.conv0.weight_v", "generator.pre_block0.1.adjust_dim_layer.bias", "generator.pre_block0.1.adjust_dim_layer.weight_g", "generator.pre_block0.1.adjust_dim_layer.weight_v", "generator.attention0.cross_attn.in_proj_weight", "generator.attention0.cross_attn.in_proj_bias", "generator.attention0.cross_attn.out_proj.weight", "generator.attention0.cross_attn.out_proj.bias", "generator.pre_block1.0.conv_block1.conv_block.conv0.bias", "generator.pre_block1.0.conv_block1.conv_block.conv0.weight_g", "generator.pre_block1.0.conv_block1.conv_block.conv0.weight_v", "generator.pre_block1.0.conv_block2.conv_block.conv0.bias", "generator.pre_block1.0.conv_block2.conv_block.conv0.weight_g", "generator.pre_block1.0.conv_block2.conv_block.conv0.weight_v", "generator.pre_block1.0.adjust_dim_layer.bias", "generator.pre_block1.0.adjust_dim_layer.weight_g", "generator.pre_block1.0.adjust_dim_layer.weight_v", "generator.pre_block1.1.conv_block1.conv_block.conv0.bias", "generator.pre_block1.1.conv_block1.conv_block.conv0.weight_g", "generator.pre_block1.1.conv_block1.conv_block.conv0.weight_v", "generator.pre_block1.1.conv_block2.conv_block.conv0.bias", 
"generator.pre_block1.1.conv_block2.conv_block.conv0.weight_g", "generator.pre_block1.1.conv_block2.conv_block.conv0.weight_v", "generator.pre_block1.1.adjust_dim_layer.bias", "generator.pre_block1.1.adjust_dim_layer.weight_g", "generator.pre_block1.1.adjust_dim_layer.weight_v", "generator.attention1.cross_attn.in_proj_weight", "generator.attention1.cross_attn.in_proj_bias", "generator.attention1.cross_attn.out_proj.weight", "generator.attention1.cross_attn.out_proj.bias", "generator.smoothers.0.self_attn.in_proj_weight", "generator.smoothers.0.self_attn.in_proj_bias", "generator.smoothers.0.self_attn.out_proj.weight", "generator.smoothers.0.self_attn.out_proj.bias", "generator.smoothers.0.conv0.bias", "generator.smoothers.0.conv0.weight_g", "generator.smoothers.0.conv0.weight_v", "generator.smoothers.0.conv1.bias", "generator.smoothers.0.conv1.weight_g", "generator.smoothers.0.conv1.weight_v", "generator.smoothers.1.self_attn.in_proj_weight", "generator.smoothers.1.self_attn.in_proj_bias", "generator.smoothers.1.self_attn.out_proj.weight", "generator.smoothers.1.self_attn.out_proj.bias", "generator.smoothers.1.conv0.bias", "generator.smoothers.1.conv0.weight_g", "generator.smoothers.1.conv0.weight_v", "generator.smoothers.1.conv1.bias", "generator.smoothers.1.conv1.weight_g", "generator.smoothers.1.conv1.weight_v", "generator.smoothers.2.self_attn.in_proj_weight", "generator.smoothers.2.self_attn.in_proj_bias", "generator.smoothers.2.self_attn.out_proj.weight", "generator.smoothers.2.self_attn.out_proj.bias", "generator.smoothers.2.conv0.bias", "generator.smoothers.2.conv0.weight_g", "generator.smoothers.2.conv0.weight_v", "generator.smoothers.2.conv1.bias", "generator.smoothers.2.conv1.weight_g", "generator.smoothers.2.conv1.weight_v", "generator.post_block.0.conv_block1.conv_block.conv0.bias", "generator.post_block.0.conv_block1.conv_block.conv0.weight_g", "generator.post_block.0.conv_block1.conv_block.conv0.weight_v", "generator.post_block.0.conv_block2.conv_block.conv0.bias", "generator.post_block.0.conv_block2.conv_block.conv0.weight_g", "generator.post_block.0.conv_block2.conv_block.conv0.weight_v", "generator.post_block.0.adjust_dim_layer.bias", "generator.post_block.0.adjust_dim_layer.weight_g", "generator.post_block.0.adjust_dim_layer.weight_v", "generator.post_block.1.conv_block1.conv_block.conv0.bias", "generator.post_block.1.conv_block1.conv_block.conv0.weight_g", "generator.post_block.1.conv_block1.conv_block.conv0.weight_v", "generator.post_block.1.conv_block2.conv_block.conv0.bias", "generator.post_block.1.conv_block2.conv_block.conv0.weight_g", "generator.post_block.1.conv_block2.conv_block.conv0.weight_v", "generator.post_block.1.adjust_dim_layer.bias", "generator.post_block.1.adjust_dim_layer.weight_g", "generator.post_block.1.adjust_dim_layer.weight_v", "generator.post_block.2.conv_block1.conv_block.conv0.bias", "generator.post_block.2.conv_block1.conv_block.conv0.weight_g", "generator.post_block.2.conv_block1.conv_block.conv0.weight_v", "generator.post_block.2.conv_block2.conv_block.conv0.bias", "generator.post_block.2.conv_block2.conv_block.conv0.weight_g", "generator.post_block.2.conv_block2.conv_block.conv0.weight_v", "generator.post_block.2.adjust_dim_layer.bias", "generator.post_block.2.adjust_dim_layer.weight_g", "generator.post_block.2.adjust_dim_layer.weight_v", "generator.post_block.3.conv_block1.conv_block.conv0.bias", "generator.post_block.3.conv_block1.conv_block.conv0.weight_g", "generator.post_block.3.conv_block1.conv_block.conv0.weight_v", 
"generator.post_block.3.conv_block2.conv_block.conv0.bias", "generator.post_block.3.conv_block2.conv_block.conv0.weight_g", "generator.post_block.3.conv_block2.conv_block.conv0.weight_v", "generator.post_block.3.adjust_dim_layer.bias", "generator.post_block.3.adjust_dim_layer.weight_g", "generator.post_block.3.adjust_dim_layer.weight_v", "generator.post_block.4.conv_block1.conv_block.conv0.bias", "generator.post_block.4.conv_block1.conv_block.conv0.weight_g", "generator.post_block.4.conv_block1.conv_block.conv0.weight_v", "generator.post_block.4.conv_block2.conv_block.conv0.bias", "generator.post_block.4.conv_block2.conv_block.conv0.weight_g", "generator.post_block.4.conv_block2.conv_block.conv0.weight_v", "generator.post_block.4.adjust_dim_layer.bias", "generator.post_block.4.adjust_dim_layer.weight_g", "generator.post_block.4.adjust_dim_layer.weight_v". Unexpected key(s) in state_dict: "encoder.pre_block.0.conv_block1.conv_block.conv0.bias", "encoder.pre_block.0.conv_block1.conv_block.conv0.weight_g", "encoder.pre_block.0.conv_block1.conv_block.conv0.weight_v", "encoder.pre_block.0.conv_block2.conv_block.conv0.bias", "encoder.pre_block.0.conv_block2.conv_block.conv0.weight_g", "encoder.pre_block.0.conv_block2.conv_block.conv0.weight_v", "encoder.pre_block.0.adjust_dim_layer.bias", "encoder.pre_block.0.adjust_dim_layer.weight_g", "encoder.pre_block.0.adjust_dim_layer.weight_v", "encoder.pre_block.1.conv_block1.conv_block.conv0.bias", "encoder.pre_block.1.conv_block1.conv_block.conv0.weight_g", "encoder.pre_block.1.conv_block1.conv_block.conv0.weight_v", "encoder.pre_block.1.conv_block2.conv_block.conv0.bias", "encoder.pre_block.1.conv_block2.conv_block.conv0.weight_g", "encoder.pre_block.1.conv_block2.conv_block.conv0.weight_v", "encoder.pre_block.1.adjust_dim_layer.bias", "encoder.pre_block.1.adjust_dim_layer.weight_g", "encoder.pre_block.1.adjust_dim_layer.weight_v", "encoder.pre_block.2.conv_block1.conv_block.conv0.bias", "encoder.pre_block.2.conv_block1.conv_block.conv0.weight_g", "encoder.pre_block.2.conv_block1.conv_block.conv0.weight_v", "encoder.pre_block.2.conv_block2.conv_block.conv0.bias", "encoder.pre_block.2.conv_block2.conv_block.conv0.weight_g", "encoder.pre_block.2.conv_block2.conv_block.conv0.weight_v", "encoder.pre_block.2.adjust_dim_layer.bias", "encoder.pre_block.2.adjust_dim_layer.weight_g", "encoder.pre_block.2.adjust_dim_layer.weight_v", "encoder.post_block.0.cross_attn.in_proj_weight", "encoder.post_block.0.cross_attn.in_proj_bias", "encoder.post_block.0.cross_attn.out_proj.weight", "encoder.post_block.0.cross_attn.out_proj.bias", "decoder.pre_conv_block.0.conv_block1.conv_block.conv0.bias", "decoder.pre_conv_block.0.conv_block1.conv_block.conv0.weight_g", "decoder.pre_conv_block.0.conv_block1.conv_block.conv0.weight_v", "decoder.pre_conv_block.0.conv_block2.conv_block.conv0.bias", "decoder.pre_conv_block.0.conv_block2.conv_block.conv0.weight_g", "decoder.pre_conv_block.0.conv_block2.conv_block.conv0.weight_v", "decoder.pre_conv_block.0.adjust_dim_layer.bias", "decoder.pre_conv_block.0.adjust_dim_layer.weight_g", "decoder.pre_conv_block.0.adjust_dim_layer.weight_v", "decoder.pre_attention_block.0.cross_attn.in_proj_weight", "decoder.pre_attention_block.0.cross_attn.in_proj_bias", "decoder.pre_attention_block.0.cross_attn.out_proj.weight", "decoder.pre_attention_block.0.cross_attn.out_proj.bias", "decoder.mel_linear1.weight", "decoder.mel_linear1.bias", "decoder.mel_linear2.weight", "decoder.mel_linear2.bias", "decoder.smoothers.0.self_attn.in_proj_weight", 
"decoder.smoothers.0.self_attn.in_proj_bias", "decoder.smoothers.0.self_attn.out_proj.weight", "decoder.smoothers.0.self_attn.out_proj.bias", "decoder.smoothers.0.conv0.bias", "decoder.smoothers.0.conv0.weight_g", "decoder.smoothers.0.conv0.weight_v", "decoder.smoothers.0.conv1.bias", "decoder.smoothers.0.conv1.weight_g", "decoder.smoothers.0.conv1.weight_v", "decoder.smoothers.1.self_attn.in_proj_weight", "decoder.smoothers.1.self_attn.in_proj_bias", "decoder.smoothers.1.self_attn.out_proj.weight", "decoder.smoothers.1.self_attn.out_proj.bias", "decoder.smoothers.1.conv0.bias", "decoder.smoothers.1.conv0.weight_g", "decoder.smoothers.1.conv0.weight_v", "decoder.smoothers.1.conv1.bias", "decoder.smoothers.1.conv1.weight_g", "decoder.smoothers.1.conv1.weight_v", "decoder.smoothers.2.self_attn.in_proj_weight", "decoder.smoothers.2.self_attn.in_proj_bias", "decoder.smoothers.2.self_attn.out_proj.weight", "decoder.smoothers.2.self_attn.out_proj.bias", "decoder.smoothers.2.conv0.bias", "decoder.smoothers.2.conv0.weight_g", "decoder.smoothers.2.conv0.weight_v", "decoder.smoothers.2.conv1.bias", "decoder.smoothers.2.conv1.weight_g", "decoder.smoothers.2.conv1.weight_v", "decoder.post_block.0.conv_block1.conv_block.conv0.bias", "decoder.post_block.0.conv_block1.conv_block.conv0.weight_g", "decoder.post_block.0.conv_block1.conv_block.conv0.weight_v", "decoder.post_block.0.conv_block2.conv_block.conv0.bias", "decoder.post_block.0.conv_block2.conv_block.conv0.weight_g", "decoder.post_block.0.conv_block2.conv_block.conv0.weight_v", "decoder.post_block.0.adjust_dim_layer.bias", "decoder.post_block.0.adjust_dim_layer.weight_g", "decoder.post_block.0.adjust_dim_layer.weight_v", "decoder.post_block.1.conv_block1.conv_block.conv0.bias", "decoder.post_block.1.conv_block1.conv_block.conv0.weight_g", "decoder.post_block.1.conv_block1.conv_block.conv0.weight_v", "decoder.post_block.1.conv_block2.conv_block.conv0.bias", "decoder.post_block.1.conv_block2.conv_block.conv0.weight_g", "decoder.post_block.1.conv_block2.conv_block.conv0.weight_v", "decoder.post_block.1.adjust_dim_layer.bias", "decoder.post_block.1.adjust_dim_layer.weight_g", "decoder.post_block.1.adjust_dim_layer.weight_v", "decoder.post_block.2.conv_block1.conv_block.conv0.bias", "decoder.post_block.2.conv_block1.conv_block.conv0.weight_g", "decoder.post_block.2.conv_block1.conv_block.conv0.weight_v", "decoder.post_block.2.conv_block2.conv_block.conv0.bias", "decoder.post_block.2.conv_block2.conv_block.conv0.weight_g", "decoder.post_block.2.conv_block2.conv_block.conv0.weight_v", "decoder.post_block.2.adjust_dim_layer.bias", "decoder.post_block.2.adjust_dim_layer.weight_g", "decoder.post_block.2.adjust_dim_layer.weight_v", "decoder.post_block.3.conv_block1.conv_block.conv0.bias", "decoder.post_block.3.conv_block1.conv_block.conv0.weight_g", "decoder.post_block.3.conv_block1.conv_block.conv0.weight_v", "decoder.post_block.3.conv_block2.conv_block.conv0.bias", "decoder.post_block.3.conv_block2.conv_block.conv0.weight_g", "decoder.post_block.3.conv_block2.conv_block.conv0.weight_v", "decoder.post_block.3.adjust_dim_layer.bias", "decoder.post_block.3.adjust_dim_layer.weight_g", "decoder.post_block.3.adjust_dim_layer.weight_v".