With hparams.num_gpus set to 2, running
scripts/train.sh exp_name ./config/train.yaml 2 1
throws the error:
Cowardly refusing to serialize non-leaf tensor which requires_grad, since autograd does not support crossing process boundaries.
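For context, here is a minimal sketch (not taken from this repo) of what triggers that message: torch.multiprocessing refuses to serialize a non-leaf tensor that still requires grad, because the autograd graph attached to it cannot cross the process boundary, while a detached copy of the same data goes through fine.

import torch
import torch.multiprocessing as mp

q = mp.get_context("spawn").SimpleQueue()

x = torch.randn(4, requires_grad=True)  # leaf tensor
y = x * 2                               # non-leaf tensor that still requires grad

q.put(y.detach())  # works: detach() drops the autograd graph, only the data crosses the boundary
q.put(y)           # RuntimeError: "Cowardly refusing to serialize non-leaf tensor which requires_grad ..."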
The original logic requires this tensor to receive gradients during backpropagation. With multi-process parallelism, serializing a tensor that requires grad would split the computation graph across processes, and autograd cannot synchronize a graph between processes. I can locate the tensor, but it genuinely needs gradient computation; if I set requires_grad=False, won't that change the original logic? What I want is to keep multi-process parallelism while still supporting autograd for this tensor.
The underlying problem is that autograd does not support synchronization across process boundaries.
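One common pattern, sketched below with hypothetical helper names and a generic queue (not a fix specific to this repo), is to keep each computation graph local to a single process: send only detached data across the process boundary, then turn it back into a grad-requiring leaf in the receiving process.

import torch

def send_tensor(t, queue):
    # Detach before serializing: the data is unchanged, but the graph stays in this process.
    queue.put(t.detach())

def receive_tensor(queue):
    t = queue.get()
    # Make the received tensor a new leaf so autograd works inside this process.
    t.requires_grad_(True)
    return t

The trade-off is that gradients only flow within the receiving process; whatever produced the tensor in the sending process no longer gets gradients from it. That is also why DistributedDataParallel keeps the entire forward/backward inside each process and synchronizes only the resulting .grad tensors via all-reduce, rather than trying to share one autograd graph across processes.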