section 02: I don't understand how optimization and backpropagation are implemented in the code #1121
VivekChimman started this conversation in General
Replies: 1 comment · 2 replies
-
Learn about gradient descent from linear regression and the math behind it. You will get the gist of what is happening behind the scenes. Then watch Andrej Karpathy's micrograd video on YouTube.
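To make that concrete, here is a minimal sketch of gradient descent on a toy 1-D linear regression, with the forward pass, hand-derived gradients, and update step written out explicitly (the data, learning rate, and epoch count are made up for illustration):

```python
# Fit y = w * x + b to toy data with plain gradient descent.
# All numbers are hypothetical, chosen so the true answer is w = 2, b = 1.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0   # parameters, initialized to zero
lr = 0.01         # learning rate (assumed value)

for epoch in range(5000):
    # Forward pass: predictions and mean squared error loss
    preds = [w * x + b for x in xs]
    loss = sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(xs)

    # Backward pass: derivatives of the loss, derived by hand
    dw = sum(2 * (p - y) * x for p, y, x in zip(preds, ys, xs)) / len(xs)
    db = sum(2 * (p - y) for p, y in zip(preds, ys)) / len(xs)

    # Update step: move each parameter against its gradient
    w -= lr * dw
    b -= lr * db

print(w, b)  # converges toward w ≈ 2.0, b ≈ 1.0
```

The three phases in this sketch (forward pass, gradient computation, parameter update) are exactly what `loss.backward()` and `optimizer.step()` automate in PyTorch.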
-
Here in the training loop, `optimizer.zero_grad()`, `loss.backward()`, and `optimizer.step()` look like they are not connected to each other, so how do they help the model get trained on the data via optimization and backpropagation? What is happening there?
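The three calls are connected through the model's parameter tensors: the optimizer is constructed with `model.parameters()`, so it holds references to the very same tensors the model computes with, and autograd writes gradients into those tensors' `.grad` attributes. A minimal sketch of how the pieces fit together (the model, data, and hyperparameters here are hypothetical):

```python
import torch

# Toy setup: a single linear layer and random data, purely for illustration
model = torch.nn.Linear(10, 1)
x = torch.randn(32, 10)
y = torch.randn(32, 1)

# The optimizer receives references to the model's parameter tensors;
# this is the hidden link between the three calls below.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):
    optimizer.zero_grad()   # clear each parameter's .grad so gradients
                            # from the previous step do not accumulate
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()         # backpropagation: autograd walks the graph built
                            # by the forward pass and fills each param's .grad
    optimizer.step()        # optimization: reads each param's .grad and
                            # updates the parameter in place (SGD: p -= lr * p.grad)
```

So the calls never pass data to each other directly; they communicate through the shared parameter tensors. `loss.backward()` produces the gradients that `optimizer.step()` consumes, and `optimizer.zero_grad()` resets them before the next iteration.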