The following exception generates a huge stack trace, and because TensorSensor does not descend into tensor library code, its error message augmentation only points at Y = model(X) as the problem. It would be better to let it see inside the model pipeline so that it can notice that the error is actually here:
nn.Linear(10, n_neurons)
which should be
nn.Linear(n_neurons, 10)
Here's the full example:
import torch
from torch import nn
import tsensor

n = 20
n_neurons = 50
model = nn.Sequential(
    nn.Linear(784, n_neurons),  # 28x28 flattened image
    nn.ReLU(),
    nn.Linear(10, n_neurons),   # 10 output classes (0-9) <---- oops! reverse those
    nn.Softmax(dim=1)
)
X = torch.rand(n, 784)  # n instances of feature vectors with 784 pixels
with tsensor.clarify():
    Y = model(X)
The error message we get is:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-32-203c7ad8d609> in <module>
1 with tsensor.clarify():
----> 2 Y = model(X)
~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/container.py in forward(self, input)
137 def forward(self, input):
138 for module in self:
--> 139 input = module(input)
140 return input
141
~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py in forward(self, input)
94
95 def forward(self, input: Tensor) -> Tensor:
---> 96 return F.linear(input, self.weight, self.bias)
97
98 def extra_repr(self) -> str:
~/opt/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py in linear(input, weight, bias)
1845 if has_torch_function_variadic(input, weight):
1846 return handle_torch_function(linear, (input, weight), input, weight, bias=bias)
-> 1847 return torch._C._nn.linear(input, weight, bias)
1848
1849
RuntimeError: mat1 and mat2 shapes cannot be multiplied (20x50 and 10x50)
Cause: model(X) tensor arg X w/shape [20, 784]
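As a possible workaround until the augmentation can descend into the model, here is a minimal sketch (plain PyTorch hooks, not TensorSensor API) that registers a forward pre-hook on each submodule of the same model to print the shape of the tensor entering it, so the mismatch at the second nn.Linear shows up right before the exception:

import torch
from torch import nn

n = 20
n_neurons = 50
model = nn.Sequential(
    nn.Linear(784, n_neurons),
    nn.ReLU(),
    nn.Linear(10, n_neurons),   # still reversed on purpose
    nn.Softmax(dim=1)
)

def report_shapes(module, inputs):
    # inputs is the tuple of tensors about to be passed to this submodule
    print(f"{module.__class__.__name__}: input shape {tuple(inputs[0].shape)}")

hooks = [m.register_forward_pre_hook(report_shapes) for m in model]

X = torch.rand(n, 784)
try:
    Y = model(X)
finally:
    for h in hooks:
        h.remove()

Running this prints Linear: input shape (20, 784), ReLU: (20, 50), then Linear: (20, 50) immediately before the RuntimeError, which is exactly the per-layer information that would let the augmentation blame nn.Linear(10, n_neurons) instead of the whole model(X) call.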