In the second ML assignment you have to compare the performance of three different classification algorithms, namely Naive Bayes, SVM, and Random Forest.

For this assignment you need to generate a random binary classification problem, and then train and test the three algorithms using 10-fold cross-validation. For some algorithms, an inner cross-validation (5-fold) is needed to choose the parameters. Then show the classification performance (per-fold and averaged) in the report, and briefly discuss the results.
![](https://i-blog.csdnimg.cn/blog_migrate/a5c596d9fb9980f2c5a83977e2be045a.png)
```python
from sklearn import datasets
from sklearn import cross_validation  # deprecated since 0.18; see the warning note below
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

# step 1: generate a random binary classification problem
X, y = datasets.make_classification(n_samples=1000, n_features=10)

# step 2: 10-fold cross-validation
kf = cross_validation.KFold(len(X), n_folds=10, shuffle=True)
for train_index, test_index in kf:
    x_train, y_train = X[train_index], y[train_index]
    x_test, y_test = X[test_index], y[test_index]

    # step 3: train the three classifiers on this fold
    # Gaussian Naive Bayes
    GNBclf = GaussianNB()
    GNBclf.fit(x_train, y_train)
    GNBpred = GNBclf.predict(x_test)
    # SVC
    SVCclf = SVC(C=1e-01, kernel='rbf', gamma=0.1)
    SVCclf.fit(x_train, y_train)
    SVCpred = SVCclf.predict(x_test)
    # Random Forest
    RFCclf = RandomForestClassifier(n_estimators=10)
    RFCclf.fit(x_train, y_train)
    RFCpred = RFCclf.predict(x_test)

    # step 4: per-fold metrics for each classifier
    GNBacc = metrics.accuracy_score(y_test, GNBpred)
    GNBf1 = metrics.f1_score(y_test, GNBpred)
    GNBauc = metrics.roc_auc_score(y_test, GNBpred)
    SVCacc = metrics.accuracy_score(y_test, SVCpred)
    SVCf1 = metrics.f1_score(y_test, SVCpred)
    SVCauc = metrics.roc_auc_score(y_test, SVCpred)
    RFCacc = metrics.accuracy_score(y_test, RFCpred)
    RFCf1 = metrics.f1_score(y_test, RFCpred)
    RFCauc = metrics.roc_auc_score(y_test, RFCpred)

    print('GaussianNB: Accuracy: ' + str(GNBacc) + ' F1-score: ' + str(GNBf1) + ' AUC ROC: ' + str(GNBauc))
    print('SVC: Accuracy: ' + str(SVCacc) + ' F1-score: ' + str(SVCf1) + ' AUC ROC: ' + str(SVCauc))
    print('RFC: Accuracy: ' + str(RFCacc) + ' F1-score: ' + str(RFCf1) + ' AUC ROC: ' + str(RFCauc))
```
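The assignment also asks for an inner 5-fold cross-validation to choose parameters for some algorithms, which the listing above does not implement. One way to add it for the SVC is `GridSearchCV` from the newer `sklearn.model_selection` module; a minimal sketch, where the grid of `C` and `gamma` values is just an illustrative assumption:

```python
from sklearn import datasets
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.svm import SVC
from sklearn import metrics

X, y = datasets.make_classification(n_samples=1000, n_features=10, random_state=0)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for train_index, test_index in kf.split(X):
    x_train, y_train = X[train_index], y[train_index]
    x_test, y_test = X[test_index], y[test_index]

    # inner 5-fold CV over an illustrative grid of C and gamma values
    param_grid = {'C': [0.1, 1, 10], 'gamma': [0.01, 0.1, 1]}
    grid = GridSearchCV(SVC(kernel='rbf'), param_grid, cv=5)
    grid.fit(x_train, y_train)  # refits on the full training fold with the best params

    pred = grid.predict(x_test)
    print(grid.best_params_, metrics.accuracy_score(y_test, pred))
```

The inner CV only ever sees the training part of each outer fold, so the test fold stays untouched by the parameter search.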
![](https://i-blog.csdnimg.cn/blog_migrate/dfbebc2e2cc0795a01623f68cfb2501d.png)
A warning message will appear:
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/sklearn/cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
"This module will be removed in 0.20.", DeprecationWarning)
This is because the `cross_validation` module has been deprecated, but it does not affect the results; it is only a warning.

From the results we can see that the Random Forest model performs best.
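On newer scikit-learn versions the same experiment can be written against `sklearn.model_selection` instead, which avoids the warning and also makes the averaged scores the assignment asks for easy to compute. A minimal sketch (accuracy only; the other metrics can be obtained the same way via the `scoring` parameter):

```python
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

X, y = datasets.make_classification(n_samples=1000, n_features=10, random_state=0)

classifiers = {
    'GaussianNB': GaussianNB(),
    'SVC': SVC(C=0.1, kernel='rbf', gamma=0.1),
    'RFC': RandomForestClassifier(n_estimators=10),
}
for name, clf in classifiers.items():
    # 10-fold CV: one accuracy score per fold, then the average
    scores = cross_val_score(clf, X, y, cv=10, scoring='accuracy')
    print(name, 'per-fold:', scores, 'mean:', scores.mean())
```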