I am trying to predict house prices on the Boston dataset with a random forest, using sklearn's RandomForestRegressor.
Here is my train/test split:
'''Train Test Split of Data'''
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 1)
Dimensions of the train/test split:
X.shape: (489, 11)
X_train.shape: (366, 11)
X_test.shape: (123, 11)
Here is my random forest model:
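The model definition did not survive formatting; a minimal sketch of a RandomForestRegressor setup along these lines, where the hyperparameters and the synthetic stand-in data are assumptions (Boston is no longer bundled with recent sklearn versions):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

# Synthetic stand-in with the same shape as the real data (489 rows, 11 features)
X, y = make_regression(n_samples=489, n_features=11, noise=10.0, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Assumed hyperparameters -- placeholders, not the original (lost) settings
forest = RandomForestRegressor(n_estimators=100, random_state=1)
forest.fit(X_train, y_train)
```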
To evaluate the model's performance, I tried sklearn's learning_curve with the code below:
train_sizes = [1, 25, 50, 100, 200, 390] # 390 is 80% of shape(X)
from sklearn.model_selection import learning_curve
import matplotlib.pyplot as plt

def learning_curves(estimator, X, y, train_sizes, cv):
    train_sizes, train_scores, validation_scores = learning_curve(
        estimator, X, y, train_sizes=train_sizes,
        cv=cv, scoring='neg_mean_squared_error')
    # print('Training scores:\n\n', train_scores)
    # print('\n', '-' * 70)  # separator to make the output easy to read
    # print('\nValidation scores:\n\n', validation_scores)
    train_scores_mean = -train_scores.mean(axis=1)
    print(train_scores_mean)
    validation_scores_mean = -validation_scores.mean(axis=1)
    print(validation_scores_mean)
    plt.plot(train_sizes, train_scores_mean, label='Training error')
    plt.plot(train_sizes, validation_scores_mean, label='Validation error')
    plt.ylabel('MSE', fontsize=14)
    plt.xlabel('Training set size', fontsize=14)
    title = 'Learning curves for a ' + str(estimator).split('(')[0] + ' model'
    plt.title(title, fontsize=18, y=1.03)
    plt.legend()
    plt.ylim(0, 40)
Note that I am passing X, y to learning_curves, and not X_train, y_train.
I have the following questions about learning_curve. I just don't understand whether it is correct to pass the whole dataset rather than only the train subset:
For example, does the size of the test dataset change according to the train dataset sizes listed in train_sizes, or is it always fixed (which in my case would be 25%, i.e. 123 samples, based on my train/test split)? For instance:
When train dataset size = 1, is the test data size 488 or 123 (the size of X_test)?
When train dataset size = 25, is the test data size 464 or 123 (the size of X_test)?
When train dataset size = 50, is the test data size 439 or 123 (the size of X_test)?
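One way to inspect what learning_curve actually does is to look at the arrays it returns; a minimal sketch, where the synthetic data (standing in for mine) and cv=5 are assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import learning_curve

# Synthetic stand-in with the same shape as my data
X, y = make_regression(n_samples=489, n_features=11, noise=10.0, random_state=1)

sizes, train_scores, validation_scores = learning_curve(
    RandomForestRegressor(n_estimators=10, random_state=1),
    X, y,
    train_sizes=[1, 25, 50, 100, 200, 390],
    cv=5,
    scoring='neg_mean_squared_error')

print(sizes)                    # the training-set sizes actually used
print(train_scores.shape)       # (n_sizes, n_cv_folds)
print(validation_scores.shape)  # one validation score per CV fold per size
```

The second dimension of both score arrays is the number of CV folds, so the per-fold validation scores can be examined directly for each entry of train_sizes.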