1. Theoretical principles
Hyperparameter search:
https://blog.csdn.net/caoyuan666/article/details/105933836
2. Wrapping a Keras model for sklearn
- 1. Convert the Keras model into an sklearn model (the code below assumes pre-scaled train/validation/test splits; see the data-preparation sketch after this list)
- 2. Define the parameter set
- 3. Search the parameters
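The code in this section uses x_train_scaled, y_train, x_valid_scaled, y_valid, x_test_scaled, and y_test without defining them. A minimal data-preparation sketch, assuming a regression dataset such as sklearn's fetch_california_housing (the dataset is not named in these notes), might look like this:

```python
import numpy as np
from tensorflow import keras
from sklearn.datasets import fetch_california_housing
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Load a regression dataset (assumption: California housing, 8 numeric features).
housing = fetch_california_housing()

# Split into train / validation / test sets.
x_train_all, x_test, y_train_all, y_test = train_test_split(
    housing.data, housing.target, random_state=7)
x_train, x_valid, y_train, y_valid = train_test_split(
    x_train_all, y_train_all, random_state=11)

# Standardize the features; fit the scaler on the training set only.
scaler = StandardScaler()
x_train_scaled = scaler.fit_transform(x_train)
x_valid_scaled = scaler.transform(x_valid)
x_test_scaled = scaler.transform(x_test)
```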
```python
def build_model(hidden_layers=1,
                layer_size=30,
                learning_rate=3e-3):
    # A simple MLP regressor; depth, width, and learning rate are searchable.
    model = keras.models.Sequential()
    model.add(keras.layers.Dense(layer_size, activation='relu',
                                 input_shape=x_train_scaled.shape[1:]))
    for _ in range(hidden_layers - 1):
        model.add(keras.layers.Dense(layer_size, activation='relu'))
    model.add(keras.layers.Dense(1))
    optimizer = keras.optimizers.SGD(learning_rate)
    model.compile(loss='mse', optimizer=optimizer)
    return model

# Wrap the model-building function as an sklearn regressor.
sklearn_model = keras.wrappers.scikit_learn.KerasRegressor(build_model)

callbacks = [keras.callbacks.EarlyStopping(patience=5, min_delta=1e-2)]
history = sklearn_model.fit(x_train_scaled, y_train,
                            epochs=10,
                            validation_data=(x_valid_scaled, y_valid),
                            callbacks=callbacks)
```
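The assignment history = sklearn_model.fit(...) suggests the wrapper passes through the Keras History object here; assuming it does, the learning curves can be plotted with a small helper (pandas and matplotlib assumed available):

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_learning_curves(history):
    # history.history maps metric names (e.g. 'loss', 'val_loss') to per-epoch values.
    pd.DataFrame(history.history).plot(figsize=(8, 5))
    plt.grid(True)
    plt.xlabel('epoch')
    plt.ylabel('mse loss')
    plt.show()

plot_learning_curves(history)
```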
3. Hyperparameter search with sklearn
For the learning rate we use a distribution instead of a fixed list: only the minimum and maximum are given, and concrete values are drawn from within that interval. The reciprocal (log-uniform) distribution has the density
- f(x) = 1 / (x * log(b/a)), for a <= x <= b, where log is the natural logarithm
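A quick check of that density with scipy, using the same bounds as the search below (purely illustrative):

```python
import numpy as np
from scipy.stats import reciprocal

a, b = 1e-4, 1e-2
dist = reciprocal(a, b)

# Draw a few samples from the log-uniform (reciprocal) distribution.
samples = dist.rvs(size=5, random_state=42)
print(samples)  # all values lie between 1e-4 and 1e-2

# The density matches f(x) = 1 / (x * log(b/a)).
x = 1e-3
print(dist.pdf(x), 1 / (x * np.log(b / a)))
```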
```python
import numpy as np
from scipy.stats import reciprocal

# Search space: lists are sampled uniformly, distributions are sampled directly.
param_distribution = {
    "hidden_layers": [1, 2, 3, 4],
    "layer_size": np.arange(1, 100),          # start at 1: a Dense layer needs a positive unit count
    "learning_rate": reciprocal(1e-4, 1e-2),  # log-uniform between 1e-4 and 1e-2
}
```
```python
from sklearn.model_selection import RandomizedSearchCV

# n_iter: how many parameter combinations to sample from the search space
# n_jobs: how many jobs to run in parallel
random_search_cv = RandomizedSearchCV(sklearn_model,
                                      param_distribution,
                                      n_iter=10,
                                      cv=4,
                                      n_jobs=1)
random_search_cv.fit(x_train_scaled, y_train,
                     epochs=10,
                     validation_data=(x_valid_scaled, y_valid),
                     callbacks=callbacks)
```
RandomizedSearchCV
For details, see the paper "Random Search for Hyper-Parameter Optimization".
Inspecting its source code, the search strategy is as follows (a sketch of the sampling step appears after this list):
- (a) For hyperparameters whose search range is a distribution, sample randomly from the given distribution;
- (b) For hyperparameters whose search range is a list, sample from the list with equal probability;
- (c) Iterate over the n_iter sets of samples obtained in steps (a) and (b).
- (Note) If every search range is a list, n_iter combinations are drawn without replacement.
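The sampling described in (a)-(c) is what sklearn.model_selection.ParameterSampler does, and RandomizedSearchCV uses it internally; a minimal sketch of just the sampling step, without any training:

```python
from scipy.stats import reciprocal
from sklearn.model_selection import ParameterSampler

demo_space = {
    "hidden_layers": [1, 2, 3, 4],            # list  -> uniform choice
    "learning_rate": reciprocal(1e-4, 1e-2),  # distribution -> rvs() sampling
}

# Draw n_iter=5 candidate combinations, as RandomizedSearchCV would.
sampler = ParameterSampler(demo_space, n_iter=5, random_state=42)
for params in sampler:
    print(params)
```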
Parameters:
- sklearn_model: the sklearn-wrapped model defined in the code of the previous section
- param_distribution: the hyperparameter dictionary
- n_iter: how many parameter combinations to sample from the search space
- n_jobs: how many jobs to run in parallel
- cv: split the training data into n folds, train on n-1 of them and evaluate on the remaining one; adjustable via the cv argument (the default was 3, i.e. n = 3, in older sklearn versions; newer versions default to 5). A small illustration of the splitting follows this list.
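To illustrate what cv=4 means, the sketch below prints the four train/validation splits that KFold produces on a toy array of 8 samples (independent of the search above):

```python
import numpy as np
from sklearn.model_selection import KFold

X = np.arange(16).reshape(8, 2)   # 8 toy samples, 2 features each

# cv=4: each split trains on 6 samples and evaluates on the remaining 2.
for fold, (train_idx, valid_idx) in enumerate(KFold(n_splits=4).split(X)):
    print(f"fold {fold}: train={train_idx}, valid={valid_idx}")
```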
4. Displaying the results
```python
print(random_search_cv.best_params_)
print(random_search_cv.best_score_)
print(random_search_cv.best_estimator_)
```
View the best parameters and the best score. Note that best_score_ is negative: KerasRegressor defines its score as the negative of the loss (here, negative MSE), since sklearn always maximizes scores.
```
{'hidden_layers': 4, 'layer_size': 5, 'learning_rate': 0.004605738848316287}
-0.4479178825629158
<tensorflow.python.keras.wrappers.scikit_learn.KerasRegressor object at 0x000001B68BB68548>
```
Evaluate the best model on the test set:
```python
model = random_search_cv.best_estimator_.model
model.evaluate(x_test_scaled, y_test)
```
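Besides the single best combination, every sampled combination and its cross-validated score is stored in cv_results_; assuming pandas is available, one way to inspect them:

```python
import pandas as pd

# One row per sampled parameter combination, with its mean test score and rank.
results = pd.DataFrame(random_search_cv.cv_results_)
print(results[["param_hidden_layers", "param_layer_size",
               "param_learning_rate", "mean_test_score", "rank_test_score"]])
```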