The Keras library provides wrapper classes that package a Keras deep learning model as a scikit-learn classifier or regressor, so that scikit-learn's methods and functions can be applied to it directly. The wrappers are KerasClassifier (classification) and KerasRegressor (regression); only KerasClassifier is demonstrated below.
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pandas as pd
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import LabelEncoder
from keras.wrappers.scikit_learn import KerasClassifier
Define a builder function that creates a simple multilayer neural network:
# Build the model
def create_model():
    # Define the network
    model = Sequential()
    model.add(Dense(units=12, input_dim=8, activation='relu'))
    model.add(Dense(units=8, activation='relu'))
    model.add(Dense(units=1, activation='sigmoid'))
    # Compile the model
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model
Data preparation:
# Fix the random seed for reproducibility
seed = 7
np.random.seed(seed)
# Load the data
dataset = pd.read_csv('XXX/diabetes.csv')
# Split into inputs x and output Y
x = dataset.iloc[:, 0:8]
Y = dataset.iloc[:, 8]
# TensorFlow expects training data as NumPy arrays
x = np.array(x)
# Label-encode Y
Y = Y.values.tolist()
label_encoder = LabelEncoder()
Y = np.array(label_encoder.fit_transform(Y))
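LabelEncoder maps each distinct class label to an integer in 0..n_classes-1, with classes ordered alphabetically. A minimal standalone sketch with made-up labels (independent of the diabetes data):

```python
from sklearn.preprocessing import LabelEncoder

# Hypothetical string labels, just for illustration
labels = ['tested_positive', 'tested_negative', 'tested_positive']
encoder = LabelEncoder()
encoded = encoder.fit_transform(labels)
print(encoded)           # [1 0 1]
print(encoder.classes_)  # ['tested_negative' 'tested_positive']
```

Because the classes are sorted, 'tested_negative' becomes 0 and 'tested_positive' becomes 1; `encoder.inverse_transform` recovers the original strings.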
Pass the builder function to KerasClassifier via the build_fn argument, and set epochs and batch_size. These parameters are bound automatically and passed on to fit() when KerasClassifier calls it internally.
# Create the model for scikit-learn
model = KerasClassifier(build_fn=create_model, epochs=150, batch_size=10, verbose=0)
Use k-fold cross-validation, evaluate the deep learning model with scikit-learn's cross_val_score, and print the result:
# 10-fold cross-validation
kfold = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
results = cross_val_score(model, x, Y, cv=kfold)
print(results.mean())
This prints the mean accuracy over the 10 folds.
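cross_val_score returns one score per fold as a NumPy array, so the spread across folds is available alongside the mean. A small sketch with a hypothetical score array standing in for the real output:

```python
import numpy as np

# Hypothetical per-fold accuracies, shaped like the array cross_val_score returns
results = np.array([0.74, 0.70, 0.77, 0.69, 0.73, 0.75, 0.71, 0.72, 0.76, 0.73])
print('%.4f (+/- %.4f)' % (results.mean(), results.std()))  # 0.7300 (+/- 0.0245)
```

Reporting the standard deviation alongside the mean shows how stable the model is across folds.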
In addition, hyperparameters can be tuned with GridSearchCV:
from keras.models import Sequential
from keras.layers import Dense
import numpy as np
import pandas as pd
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import LabelEncoder
from keras.wrappers.scikit_learn import KerasClassifier
# Build the model; optimizer and init have defaults and are the tuning targets
def create_model(optimizer='adam', init='glorot_uniform'):
    # Define the network; kernel_initializer sets the initial weights
    model = Sequential()
    model.add(Dense(units=12, kernel_initializer=init, input_dim=8, activation='relu'))
    model.add(Dense(units=8, kernel_initializer=init, activation='relu'))
    model.add(Dense(units=1, kernel_initializer=init, activation='sigmoid'))
    # Compile the model
    model.compile(loss='binary_crossentropy', optimizer=optimizer, metrics=['accuracy'])
    return model
# Fix the random seed for reproducibility
seed = 7
np.random.seed(seed)
# Load the data
dataset = pd.read_csv('XXXX/diabetes.csv')
# Split into inputs x and output Y
x = dataset.iloc[:, 0:8]
Y = dataset.iloc[:, 8]
# TensorFlow expects training data as NumPy arrays
x = np.array(x)
# Label-encode Y
Y = Y.values.tolist()
label_encoder = LabelEncoder()
Y = np.array(label_encoder.fit_transform(Y))
# Create the model for scikit-learn
model = KerasClassifier(build_fn=create_model, verbose=0)
# Parameters to tune
param_grid = {}
param_grid['optimizer'] = ['rmsprop', 'adam']
param_grid['init'] = ['glorot_uniform', 'normal', 'uniform']
param_grid['epochs'] = [50, 100, 150, 200]
param_grid['batch_size'] = [5, 10, 20]
# Run the grid search
grid = GridSearchCV(estimator=model, param_grid=param_grid)
results = grid.fit(x, Y)
# Print the results
print('Best: %f using %s' % (results.best_score_, results.best_params_))
means = results.cv_results_['mean_test_score']
stds = results.cv_results_['std_test_score']
params = results.cv_results_['params']
for mean, std, param in zip(means, stds, params):
print('%f (%f) with: %r' % (mean, std, param))
With the best parameter combination identified, it can then be used to make predictions on the test set.
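After fit, GridSearchCV by default refits the best parameter combination on the full training data (refit=True), and grid.predict delegates to that refitted model (grid.best_estimator_). The same pattern applies to the wrapped Keras model above; a minimal sketch of the mechanics using a scikit-learn LogisticRegression on synthetic data as a stand-in estimator:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic stand-in for the diabetes data: 8 features, binary target
x, Y = make_classification(n_samples=200, n_features=8, random_state=7)
x_train, x_test, Y_train, Y_test = train_test_split(x, Y, test_size=0.2, random_state=7)

# Grid search over a hypothetical parameter grid
grid = GridSearchCV(LogisticRegression(max_iter=1000), param_grid={'C': [0.1, 1.0, 10.0]})
grid.fit(x_train, Y_train)

# grid.predict uses the refitted best model (grid.best_estimator_)
Y_pred = grid.predict(x_test)
print(grid.best_params_, (Y_pred == Y_test).mean())
```

Holding out a test split before the search, as here, keeps the final accuracy estimate independent of the data used for tuning.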
The code in this article comes from Wei Zhenyuan's book 《深度学习:基于Keras的Python实践》 (Deep Learning: Python Practice with Keras).