I want to copy a node from a gmodule called net_A to another gmodule called net_B, so I do something like:
netB.modules[i] = netA.modules[counter]
But when I run forward on netB, it fails with the following error:
torch/install/bin/luajit: ...torch/install/share/lua/5.1/cudnn/SpatialConvolution.lua:32: Only Cuda supported duh!
stack traceback:
[C]: in function 'assert'
...torch/install/share/lua/5.1/cudnn/SpatialConvolution.lua:32: in function 'resetWeightDescriptors'
...torch/install/share/lua/5.1/cudnn/SpatialConvolution.lua:96: in function 'checkInputChanged'
...torch/install/share/lua/5.1/cudnn/SpatialConvolution.lua:120: in function 'createIODescriptors'
...torch/install/share/lua/5.1/cudnn/SpatialConvolution.lua:188: in function 'func'
/home/zsf/torch/install/share/lua/5.1/nngraph/gmodule.lua:345: in function 'neteval'
/home/zsf/torch/install/share/lua/5.1/nngraph/gmodule.lua:380: in function 'forward'
test3.lua:335: in main chunk
[C]: in function 'dofile'
.../zsf/torch/install/lib/luarocks/rocks/trepl/scm-1/bin/th:150: in main chunk
[C]: at 0x00406670
So I tried copying the node's weights from netA to netB instead:
netB.modules[i].weight = netA.modules[counter].weight
netB.modules[i].bias = netA.modules[counter].bias
But the results after netB:forward(input) are wrong. Is something wrong with my copy operation?
Thanks in advance for any answers!
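For reference, one way such a copy is often written is with Tensor:copy(), which copies values in place, whereas plain assignment only rebinds the Lua reference. A minimal sketch, assuming the indices i and counter from the snippets above point at layers of the same type and tensor shape:

```lua
-- Sketch: copy parameter values instead of rebinding tensor references.
-- Assumes netB.modules[i] and netA.modules[counter] are layers of the
-- same type and shape (i and counter are the hypothetical indices above).
netB.modules[i].weight:copy(netA.modules[counter].weight)
if netB.modules[i].bias then
  netB.modules[i].bias:copy(netA.modules[counter].bias)
end
```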
Hi,
I don't have an answer to your question, but I am facing a similar issue.
I am trying to copy weights from a trained nngraph model to a new nngraph model (with almost the same structure) that I created from a file. When I do a forward pass, the results are all wrong, and I can't figure out why. For copying the weights I used
params0, gp0 = model0:parameters()
params1, gp1= model1:parameters()
and then serially copied each weight/bias tensor from params0 into the corresponding tensor in params1 using tensor1:copy(tensor2).
I later verified that the weights of the two models are the same and that the inputs are being passed correctly.
I don't know why, but some of the intermediate layer activations in the forward pass grow very large and then collapse to all zeros.
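The serial copy described above can be sketched as follows (an assumption on my part: both parameters() calls must enumerate the tensors in the same order for this to line up):

```lua
-- Sketch: copy every parameter tensor from model0 into model1.
-- Relies on both models enumerating parameters in the same order.
local params0, gp0 = model0:parameters()
local params1, gp1 = model1:parameters()
assert(#params0 == #params1, 'parameter lists differ in length')
for k = 1, #params0 do
  params1[k]:copy(params0[k])
end
```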