2018.6.13 Python homework: sklearn exercise

Assignment:

In the second ML assignment you have to compare the performance of three different classification algorithms, namely Naive Bayes, SVM, and Random Forest. For this assignment you need to generate a random binary classification problem, and then train and test (using 10-fold cross validation) the three algorithms. For some algorithms an inner cross validation (5-fold) for choosing the parameters is needed. Then, show the classification performance (per-fold and averaged) in the report, and briefly discuss the results.
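The "inner cross validation" mentioned above can be expressed in sklearn by wrapping a GridSearchCV estimator (5-fold, for parameter selection) inside an outer cross_val_score call (10-fold, for performance estimation). A minimal sketch of just this nesting idea, on a small toy dataset so it runs quickly:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Small toy problem just to illustrate the nesting
X, y = make_classification(n_samples=200, n_features=10, n_classes=2,
                           random_state=0)

# Inner 5-fold CV chooses C; the outer 10-fold CV estimates performance
inner = GridSearchCV(SVC(kernel='rbf'),
                     {'C': [1e-02, 1e-01, 1e00, 1e01, 1e02]}, cv=5)
scores = cross_val_score(inner, X, y, cv=10)  # one accuracy per outer fold
print(scores, scores.mean())
```

Each of the 10 outer folds re-runs the full grid search on its own training split, so the reported scores never touch data that was used for choosing C.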

Steps:

1. Create a classification dataset (n_samples = 1000, n_features = 10) 
2. Split the dataset using 10-fold cross validation 
3. Train the algorithms 

  • GaussianNB 
  • SVC (possible C values [1e-02, 1e-01, 1e00, 1e01, 1e02], RBF kernel) 
  • RandomForestClassifier (possible n_estimators values [10, 100, 1000]) 

4. Evaluate the cross-validated performance

  • Accuracy
  • F1-score
  • AUC ROC 

5. Write a short report summarizing the methodology and the results
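Steps 1, 2, and 4 can also be sketched very compactly with sklearn's cross_validate helper, which runs the fold loop internally and accepts several scorers at once (the scorer strings 'accuracy', 'f1', and 'roc_auc' are sklearn's names for the three requested metrics). A sketch with GaussianNB, not the full assignment solution:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_validate
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, n_features=10, n_classes=2)

# 10-fold CV, evaluating all three requested metrics on each fold
res = cross_validate(GaussianNB(), X, y, cv=10,
                     scoring=['accuracy', 'f1', 'roc_auc'])
for key in ['test_accuracy', 'test_f1', 'test_roc_auc']:
    print(key, res[key], 'mean:', res[key].mean())  # per-fold and averaged
```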


from sklearn import datasets
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics
import numpy as np

# Step 1: create a random binary classification dataset
X, y = datasets.make_classification(n_samples=1000, n_features=10,
            n_informative=2, n_redundant=2, n_repeated=0, n_classes=2)

# Step 2: 10-fold cross validation over the generated dataset
kf = KFold(n_splits=10, shuffle=True)

# Per-fold scores for each classifier: {name: {metric: [fold scores]}}
names = ['GaussianNB', 'SVC', 'RandomForest']
scores = {name: {'acc': [], 'f1': [], 'auc': []} for name in names}

# Step 3: train the three algorithms inside the fold loop
for train_index, test_index in kf.split(X):
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

    # Naive Bayes: no hyperparameters to tune
    nb = GaussianNB()

    # SVM: choose C by 5-fold inner cross validation
    svc = GridSearchCV(SVC(kernel='rbf', gamma=0.1),
                       {'C': [1e-02, 1e-01, 1e00, 1e01, 1e02]}, cv=5)

    # Random Forest: choose n_estimators by 5-fold inner cross validation
    rf = GridSearchCV(RandomForestClassifier(),
                      {'n_estimators': [10, 100, 1000]}, cv=5)

    for name, clf in zip(names, [nb, svc, rf]):
        clf.fit(X_train, y_train)
        pred = clf.predict(X_test)
        scores[name]['acc'].append(metrics.accuracy_score(y_test, pred))
        scores[name]['f1'].append(metrics.f1_score(y_test, pred))
        scores[name]['auc'].append(metrics.roc_auc_score(y_test, pred))

# Step 4: report the cross-validated performance, per fold and averaged
for name in names:
    for metric in ['acc', 'f1', 'auc']:
        per_fold = scores[name][metric]
        print(name, metric, per_fold, 'mean:', np.mean(per_fold))
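One caveat on the AUC numbers: calling roc_auc_score on hard 0/1 predictions collapses the ROC curve to a single operating point, so the value understates what the classifier's ranking can achieve. Passing class probabilities (or decision_function scores) is usually more faithful. A self-contained sketch of the difference:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=1000, n_features=10, n_classes=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.1)

clf = GaussianNB().fit(X_tr, y_tr)
auc_hard = roc_auc_score(y_te, clf.predict(X_te))               # from 0/1 labels
auc_prob = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])   # from P(class 1)
print(auc_hard, auc_prob)
```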
