Typo in 02_pytorch_classification.ipynb #1043
Caesar0714
started this conversation in
General
Replies: 1 comment 1 reply
-
Hey @Caesar0714 ,

That's a great point! However, the part about "just after the output layer" refers to activation functions such as the softmax function, which is applied to the model's raw outputs (logits). For example:

```python
outputs = model(x)
prediction_probabilities = torch.softmax(outputs, dim=1)
```

But you are right, you can use non-linear activations throughout the neural network before the output layer as well.
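To make the two placements concrete, here is a minimal sketch (the model and shapes are hypothetical, not from the notebook): non-linear activations like ReLU sit between hidden layers, while softmax is applied to the raw logits produced by the output layer.

```python
import torch
from torch import nn

# Hypothetical toy model: ReLU provides non-linearity *inside* the network,
# while the final Linear layer outputs raw logits (no activation attached).
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),        # non-linear activation before the output layer
    nn.Linear(8, 3),  # output layer -> raw logits
)

x = torch.randn(2, 4)                  # batch of 2 samples, 4 features each
outputs = model(x)                     # logits, shape (2, 3)
probs = torch.softmax(outputs, dim=1)  # applied *after* the output layer

print(probs.sum(dim=1))  # each row of probabilities sums to 1
```

Keeping the output layer as raw logits is also what loss functions like `nn.CrossEntropyLoss` expect, which is why softmax is typically applied only at prediction time.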
-
Hi there,
I just want to mention that there is a typo in the part under the question: I think the activation layer is usually placed just before the output layer rather than after it.
Best,
Caesar