Problem:
Task:
1. Create a dataset with sklearn's datasets.make_classification function.
2. Split the dataset into 10 folds for cross-validation using KFold (now in sklearn.model_selection; the old sklearn.cross_validation module was removed in scikit-learn 0.20).
3. Train three classifiers: naive Bayes, support vector machine, and random forest.
4. Evaluate cross-validation performance with three metrics: accuracy, F1 score, and ROC AUC.
Code:
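For reference, make_classification returns an (X, y) tuple: a feature matrix and a label vector. A quick standalone check of the shapes (random_state is added here for reproducibility; it is not in the original task):

```python
from sklearn.datasets import make_classification
import numpy as np

# make_classification returns a feature matrix X and a label vector y.
# random_state=0 is an illustrative choice for reproducibility.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
print(X.shape)       # (1000, 10)
print(y.shape)       # (1000,)
print(np.unique(y))  # binary labels by default (n_classes=2)
```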
from sklearn import datasets
from sklearn.model_selection import KFold  # replaces the removed cross_validation module
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

X, y = datasets.make_classification(n_samples=1000, n_features=10)
cv = KFold(n_splits=10, shuffle=True)
for train_index, test_index in cv.split(X):
    X_train, y_train = X[train_index], y[train_index]
    X_test, y_test = X[test_index], y[test_index]
    print('Gaussian NB')  # naive Bayes
    clf = GaussianNB()
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(metrics.accuracy_score(y_test, pred))  # accuracy
    print(metrics.f1_score(y_test, pred))        # F1 score
    print(metrics.roc_auc_score(y_test, pred))   # ROC AUC
    print('SVC')  # support vector machine
    clf = SVC(C=1e-01, kernel='rbf', gamma=0.1)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(metrics.accuracy_score(y_test, pred))
    print(metrics.f1_score(y_test, pred))
    print(metrics.roc_auc_score(y_test, pred))
    print('RandomForestClassifier')  # random forest
    clf = RandomForestClassifier(n_estimators=10)
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(metrics.accuracy_score(y_test, pred))
    print(metrics.f1_score(y_test, pred))
    print(metrics.roc_auc_score(y_test, pred))
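One caveat: calling roc_auc_score on hard 0/1 predictions collapses the ROC curve to a single operating point. Passing probability scores gives the true ranking-based AUC. A minimal standalone sketch (random_state and the 80/20 split are illustrative choices, not from the original):

```python
from sklearn import datasets, metrics
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = datasets.make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = GaussianNB().fit(X_train, y_train)
# Probability of the positive class: the ROC curve is then traced
# over all thresholds instead of just the default 0.5 cut-off.
scores = clf.predict_proba(X_test)[:, 1]
print(metrics.roc_auc_score(y_test, scores))
```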
Output (one fold shown):
Gaussian NB
0.92
0.9272727272727272
0.9241126070991433
SVC
0.96
0.9642857142857142
0.9620563035495715
RandomForestClassifier
0.98
0.9821428571428572
0.9824561403508771
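The per-fold loop above can also be condensed with cross_val_score, which handles the splitting and the metric in one call. A sketch, not part of the original assignment (random_state values are added for reproducibility):

```python
from sklearn import datasets
from sklearn.model_selection import KFold, cross_val_score
from sklearn.ensemble import RandomForestClassifier

X, y = datasets.make_classification(n_samples=1000, n_features=10, random_state=0)
cv = KFold(n_splits=10, shuffle=True, random_state=0)
clf = RandomForestClassifier(n_estimators=10, random_state=0)

# One scorer name per metric; the 'roc_auc' scorer uses
# probability scores internally rather than hard predictions.
for scoring in ('accuracy', 'f1', 'roc_auc'):
    fold_scores = cross_val_score(clf, X, y, cv=cv, scoring=scoring)
    print(scoring, fold_scores.mean())
```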