This repository has been archived by the owner on Nov 14, 2023. It is now read-only.
I'm using tune_sklearn.TuneSearchCV to tune a fairly large number of hyperparameters using the "hyperopt" optimization method. When I look at the results in the cv_results_ attribute, the model performance does not improve over the course of hyperparameter tuning. I also see that "training_iteration" is always 1.
Does this mean that TuneSearchCV is not taking prior search trials into account? Is it essentially behaving like a random search, and ignoring the Bayesian optimization?
Here's an illustration of the kind of code I'm using:
import pandas as pd
import ray.tune
from tune_sklearn import TuneSearchCV

estimator = << an sklearn pipeline >>
search_spaces = {
    "mdl__alpha": ray.tune.loguniform(1e-7, 1e2),  # lots of hyperparameters like this
}
search = TuneSearchCV(
    estimator,
    search_spaces,
    search_optimization="hyperopt",
    n_trials=64,
    n_jobs=7,
    scoring="balanced_accuracy",
    refit=True,
)
search.fit(X, y)  # X and y are the data and labels
cv_results = pd.DataFrame(search.cv_results_) # "search" is a fitted instance of TuneSearchCV
cv_results['training_iteration']
## RESULT
0 1
1 1
2 1
3 1
4 1
..
59 1
60 1
61 1
62 1
63 1
Name: training_iteration, Length: 64, dtype: int64
cv_results['mean_test_balanced_accuracy']
## RESULT
0 0.773810
1 0.761905
2 0.654762
3 0.714286
4 0.790476
...
59 0.747619
60 0.733333
61 0.771429
62 0.680952
63 0.745238
Name: mean_test_balanced_accuracy, Length: 64, dtype: float64
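One way to sanity-check whether the search is improving over time is to compare the early trials against the late ones. Below is a minimal sketch with pandas; the scores are synthetic stand-ins for the mean_test_balanced_accuracy column above, and it assumes the rows of cv_results_ are in the order the trials were sampled:

```python
import pandas as pd

# Synthetic stand-in for search.cv_results_["mean_test_balanced_accuracy"];
# substitute the real column from a fitted TuneSearchCV instance.
scores = pd.Series(
    [0.77, 0.76, 0.65, 0.71, 0.79, 0.75, 0.73, 0.77, 0.68, 0.76]
)

# If the searcher is exploiting earlier results, the second half of the
# trials should score better on average than the first half.
half = len(scores) // 2
early_mean = scores.iloc[:half].mean()
late_mean = scores.iloc[half:].mean()
print(f"early half: {early_mean:.3f}, late half: {late_mean:.3f}")
```

With a purely random search the two halves should be statistically indistinguishable; a clear upward drift in the second half is what Bayesian optimization is supposed to produce. Note this is only a heuristic check: hyperopt's TPE also spends its first trials on random warm-up, so a flat early stretch is expected either way.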