Current behavior
With a single worker, training with the EarlyStopping callback works fine. With multiple workers running a distributed training job, the EarlyStopping callback causes all workers to hang, waiting for synchronization.
Expected behavior
The EarlyStopping callback should work not only for single-worker training but also for multi-worker distributed training jobs.
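The hang pattern suggests that each worker makes its own stop decision: if one worker's monitored val_loss crosses the patience threshold while another's does not, the stopped worker leaves the collective ops that the remaining workers are still blocked on. Until this is fixed in the framework, a possible workaround is to run the patience logic only on the chief and broadcast its decision, for example through a file on shared storage. Below is a minimal sketch, assuming tf.keras and a directory every task can read and write; ChiefDrivenEarlyStopping, decision_dir, and is_chief are illustrative names, only mode="min" is handled, and restore_best_weights is omitted for brevity.

import os
import time

import numpy as np
import tensorflow as tf


class ChiefDrivenEarlyStopping(tf.keras.callbacks.Callback):
    """The chief runs the patience logic and publishes a per-epoch
    decision file; every task blocks on that file, so all workers
    stop at the same epoch boundary instead of diverging."""

    def __init__(self, decision_dir, monitor="val_loss",
                 min_delta=0.0, patience=0, is_chief=False):
        super().__init__()
        self.decision_dir = decision_dir
        self.monitor = monitor
        self.min_delta = min_delta
        self.patience = patience
        self.is_chief = is_chief
        self.wait = 0
        self.best = np.inf

    def _decision_path(self, epoch):
        return os.path.join(self.decision_dir, "decision-%d" % epoch)

    def on_epoch_end(self, epoch, logs=None):
        path = self._decision_path(epoch)
        if self.is_chief:
            # Same patience bookkeeping as the built-in EarlyStopping
            # (min mode): a missing metric counts as no improvement.
            current = (logs or {}).get(self.monitor, np.inf)
            if current < self.best - self.min_delta:
                self.best = current
                self.wait = 0
                decision = "continue"
            else:
                self.wait += 1
                decision = "stop" if self.wait >= self.patience else "continue"
            # Write-then-rename so readers never see a partial file.
            tmp = path + ".tmp"
            with open(tmp, "w") as f:
                f.write(decision)
            os.rename(tmp, path)
        # Every task (chief included) blocks until the decision for this
        # epoch exists, so no worker can run ahead into the next epoch's
        # collective ops while others have already stopped.
        while not os.path.exists(path):
            time.sleep(0.1)
        with open(path) as f:
            if f.read().strip() == "stop":
                self.model.stop_training = True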
System information
Code to reproduce
....
callbacks_list.append(
    EarlyStopping(
        monitor="val_loss",
        min_delta=self.ctx.min_delta,
        patience=self.ctx.patience,
        verbose=verbose,
        mode="min",
        baseline=None,
        restore_best_weights=True,
    )
)
....
keras_model.fit(
    x=None,
    y=None,
    validation_data=valid_ds,
    steps_per_epoch=self.ctx.steps_per_epoch,
    validation_steps=self.ctx.valid_steps_per_epoch,
    epochs=self.ctx.callback_num,
    callbacks=callbacks_list,
    checkpoint_dir=self.ctx.model_save_path,
    keep_checkpoint_max=1,
    verbose=0)
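For comparison, the hang can be probed with a stand-alone script using plain tf.keras and MultiWorkerMirroredStrategy (an assumption: that the problem is not specific to the custom fit wrapper above). Run one copy per worker with TF_CONFIG set; with a single worker it finishes, while with several workers it should hang once EarlyStopping triggers, if the callback itself is at fault.

import numpy as np
import tensorflow as tf

# TF_CONFIG must list every worker, e.g. for worker 0:
# {"cluster": {"worker": ["host1:2222", "host2:2222"]},
#  "task": {"type": "worker", "index": 0}}
strategy = tf.distribute.MultiWorkerMirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(8, activation="relu", input_shape=(4,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic regression data, seeded so it is identical on every worker.
x = np.random.RandomState(0).rand(256, 4).astype("float32")
y = np.random.RandomState(1).rand(256, 1).astype("float32")
train_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32).repeat()
valid_ds = tf.data.Dataset.from_tensor_slices((x, y)).batch(32)

model.fit(
    train_ds,
    validation_data=valid_ds,
    steps_per_epoch=8,
    epochs=50,
    callbacks=[tf.keras.callbacks.EarlyStopping(
        monitor="val_loss", min_delta=0.0, patience=2,
        mode="min", restore_best_weights=True)],
    verbose=0)
print("fit() returned; no hang on this worker")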
Willing to contribute
Yes