Multi-round inference and GPU memory usage #1349
Hello @ali-akhavan89,

the reason is that in each round the new simulations are added to the inference object and stored internally here:

sbi/sbi/inference/trainers/npe/npe_base.py, lines 191 to 193 in 9152e93

In each round, all data from all previous rounds is then used for training, see here:

sbi/sbi/inference/trainers/npe/npe_base.py, lines 294 to 315 in 9152e93

There are ways to circumvent this, e.g., by resetting …

A more convenient way to get more control might be to implement your own training loop, as described here: https://github.com/sbi-dev/sbi/blob/main/tutorials/18_training_interface.ipynb

Does this help?
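To illustrate the accumulation pattern described above, here is a minimal, self-contained sketch. The class and method names (`InferenceSketch`, `append_simulations`, `get_simulations`) are hypothetical stand-ins that only mimic the round-wise storage in `npe_base.py`; this is not sbi's actual implementation.

```python
class InferenceSketch:
    """Hypothetical stand-in for an sbi trainer, illustrating why
    memory grows with every round of simulations."""

    def __init__(self):
        # One list entry per round; nothing is ever discarded.
        self._theta_roundwise = []
        self._x_roundwise = []

    def append_simulations(self, theta, x):
        # Mirrors the pattern the reply points to: each round's
        # simulations are appended to internal storage.
        self._theta_roundwise.append(list(theta))
        self._x_roundwise.append(list(x))
        return self

    def get_simulations(self):
        # Training concatenates data from *all* rounds, so the
        # training set (and GPU memory, once moved to the device)
        # grows linearly with the number of rounds.
        theta = [t for round_data in self._theta_roundwise for t in round_data]
        x = [s for round_data in self._x_roundwise for s in round_data]
        return theta, x


inference = InferenceSketch()
for round_idx in range(3):
    # Pretend each round produces 100 new (theta, x) pairs.
    inference.append_simulations([float(round_idx)] * 100,
                                 [float(round_idx)] * 100)

theta_all, x_all = inference.get_simulations()
print(len(theta_all))  # 300: data from all three rounds is kept
```

After three rounds of 100 simulations each, the concatenated training set holds 300 pairs; with real tensors on a GPU, this is exactly the growth the question observes.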
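If the goal is bounded memory, a custom training loop (as in the linked tutorial) could train on the current round's data only. The sketch below uses hypothetical `simulate` and `fit` callables, not sbi's API, and note the trade-off: multi-round algorithms in sbi deliberately reuse all past data, so dropping old rounds changes the training scheme.

```python
def train_rounds_bounded(simulate, fit, num_rounds, sims_per_round):
    """Hypothetical custom training loop: train on the current round's
    simulations only, so peak memory stays constant across rounds
    instead of growing linearly.

    simulate(n) -> (theta, x): produces n parameter/simulation pairs.
    fit(theta, x, init=...)  : trains a model, warm-started from the
                               previous round's model.
    """
    model = None
    peak = 0  # largest number of pairs held at any one time
    for _ in range(num_rounds):
        theta, x = simulate(sims_per_round)  # fresh data for this round
        peak = max(peak, len(theta))
        model = fit(theta, x, init=model)    # warm-start from last round
        # theta and x go out of scope here, so the previous round's
        # data can be freed before the next round is simulated.
    return model, peak


# Usage with stub callables, just to show the bounded footprint:
def simulate(n):
    return [0.0] * n, [0.0] * n

def fit(theta, x, init=None):
    seen = (init["n_seen"] if init else 0) + len(theta)
    return {"n_seen": seen}

model, peak = train_rounds_bounded(simulate, fit,
                                   num_rounds=5, sims_per_round=100)
print(peak)  # 100: only one round's data is ever held, not 500
```

The same idea applies with real networks: keep the model parameters across rounds, but let each round's simulation tensors be released (and GPU caches cleared) once that round's training is done.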