update_batch_size in arch config #123
-
Hi, great question. Basically, the point of the update batch size is to control how many duplications of the algorithm are performed on a single device. It is the single-device equivalent of pmapping the learn function over each device; since everything runs on one device, we can choose how many duplications to perform. The reason for this control is twofold: 1. to match the design outlined in the Podracer paper that introduced this system design, and 2. to allow flexibility in how you scale the algorithm. The bottleneck can change depending on the algorithm, network, and environment you are using: maybe it is the environment, maybe the network, maybe the update function itself. Does that make sense? There is also an illustration of update_batch_size in the Anakin diagram in the README.
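To make the layering concrete, here is a minimal JAX sketch of the pattern described above: vmap duplicates the learner `update_batch_size` times on one device, and pmap then replicates that whole block across devices. `learn_step` and its quadratic loss are hypothetical stand-ins, not the system's actual learn function.

```python
import jax
import jax.numpy as jnp

update_batch_size = 4                    # learner copies per device
num_devices = jax.local_device_count()

def learn_step(params, batch):
    # Toy update: one gradient step on a quadratic loss, standing in
    # for the real learn function of a system such as DDQN.
    grads = jax.grad(lambda p: jnp.mean((p * batch) ** 2))(params)
    return params - 0.1 * grads

# vmap duplicates the learner update_batch_size times on a single device;
# pmap then replicates that whole block over every available device.
learn = jax.pmap(jax.vmap(learn_step))

# Leading axes must therefore be (num_devices, update_batch_size, ...).
params = jnp.ones((num_devices, update_batch_size, 8))
batch = jnp.ones((num_devices, update_batch_size, 8))
params = learn(params, batch)
print(params.shape)  # (num_devices, update_batch_size, 8)
```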
-
Hello,
I wanted to ask about an implementation detail. Could you give some insight into the reasoning behind the update_batch_size parameter in the config? I am using the DDQN system, and it seems like it just splits the batch into chunks before vmapping over them, which to me seems equivalent to not vmapping at all. Would it be possible to explain why it is done this way?
Thanks!
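For reference, a tiny sketch of the chunking behaviour the question describes, assuming the system reshapes the sampled batch into `update_batch_size` chunks before vmapping; the shapes and the toy computation are illustrative, not the system's actual code.

```python
import jax
import jax.numpy as jnp

update_batch_size = 4
batch = jnp.ones((128, 8))  # hypothetical sampled batch of 128 transitions

# Split the leading axis into update_batch_size independent chunks.
chunks = batch.reshape(update_batch_size, -1, 8)  # (4, 32, 8)

# vmap a (toy) per-chunk computation over the new axis. Without the
# reshape, the same computation would simply run once on the full batch.
chunk_means = jax.vmap(lambda c: jnp.mean(c))(chunks)
print(chunk_means.shape)  # (4,)
```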