I'm tuning an ElasticNet model and seeing some odd behavior. When I set the tolerance hyperparameter (tol) to a small value, I get a
"ConvergenceWarning: Objective did not converge"
warning. So I tried a larger tolerance value; the convergence warning went away, but now the test data consistently gives a higher RMSE. This seems backwards to me: if the model doesn't converge, what makes it produce better RMSE scores, or even consistent scores at all? I'm using it inside a GridSearchCV:
No warning, worse scores:

GridSearchCV(cv=KFold(n_splits=4, random_state=None, shuffle=True),
             estimator=ElasticNet(),
             param_grid={'alpha': [0.01], 'fit_intercept': [True],
                         'l1_ratio': [0.75, 0.8, 0.85, 0.9],
                         'max_iter': range(350, 450, 50), 'normalize': [False],
                         'random_state': [3], 'selection': ['random'],
                         'tol': [0.1]},
             refit='nmse', return_train_score=True,
             scoring={'mae': 'neg_mean_absolute_error',
                      'nmse': 'neg_mean_squared_error',
                      'nmsle': 'neg_mean_squared_log_error', 'r2': 'r2'})
Running GridSearchCV:
C:\miniconda3\envs\tf\lib\site-packages\sklearn\linear_model\_coordinate_descent.py:529: ConvergenceWarning: Objective did not converge. You might want to increase the number of iterations. Duality gap: 1253.5072980364057, tolerance: 848.4484859258542
positive)
Warning, better scores:

GridSearchCV(cv=KFold(n_splits=4, random_state=None, shuffle=True),
             estimator=ElasticNet(),
             param_grid={'alpha': [0.01], 'fit_intercept': [True],
                         'l1_ratio': [0.75, 0.8, 0.85, 0.9],
                         'max_iter': range(350, 450, 50), 'normalize': [False],
                         'random_state': [3], 'selection': ['random'],
                         'tol': [0.01]},
             refit='nmse', return_train_score=True,
             scoring={'mae': 'neg_mean_absolute_error',
                      'nmse': 'neg_mean_squared_error',
                      'nmsle': 'neg_mean_squared_log_error', 'r2': 'r2'})
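For reference, here is a minimal standalone sketch of the comparison I'm doing, stripped of the grid search. It uses synthetic data from make_regression rather than my actual dataset (so the exact numbers won't match), and fits a plain ElasticNet at the two tol values while capturing any ConvergenceWarning:

```python
import warnings

from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNet
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the real dataset (hypothetical sizes/noise).
X, y = make_regression(n_samples=500, n_features=50, noise=10.0, random_state=3)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=3)

for tol in (0.1, 0.01):
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        model = ElasticNet(alpha=0.01, l1_ratio=0.8, max_iter=400,
                           random_state=3, selection='random', tol=tol)
        model.fit(X_train, y_train)
    # RMSE on held-out data; did this fit raise a convergence warning?
    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    warned = any('converge' in str(w.message).lower() for w in caught)
    print(f"tol={tol}: RMSE={rmse:.3f}, convergence warning: {warned}")
```

On my real data, the tol=0.01 run is the one that warns yet scores better on every fold.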