Python Programming Assignment, Week 15 (sklearn homework)

Sklearn

  1. Create a classification dataset (n samples 1000, n features 10)
  2. Split the dataset using 10-fold cross validation
  3. Train the algorithms
  4. Evaluate the cross-validated performance: accuracy, F1-score, and AUC ROC
  5. Write a short report summarizing the methodology and the results
#1
from sklearn import datasets
X, y = datasets.make_classification(n_samples=1000, n_features=10,
            n_informative=2, n_redundant=2, n_repeated=0, n_classes=2)
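As a quick sanity check, not required by the assignment, the generated arrays can be inspected before splitting; the names X and y are the ones used throughout the rest of the code, and the shapes in the comments are what make_classification is expected to return for these parameters.

# Sanity check (optional): shapes and class balance of the generated data
import numpy as np
print(X.shape)          # (1000, 10)
print(np.bincount(y))   # roughly 500 samples per class for n_classes=2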
#2
from sklearn.model_selection import KFold
kf = KFold(n_splits=10, shuffle=True)
for train_index, test_index in kf.split(X):
    X_train, y_train = X[train_index], y[train_index]
    X_test, y_test   = X[test_index],  y[test_index]
# After the loop, X_train/X_test hold the split from the last fold,
# which is what steps #3 and #4 below operate on.
#3
# Naive Bayes
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# SVM
from sklearn.svm import SVC
clf = SVC(C=1e-01, kernel='rbf', gamma=0.1)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)

# Random Forest
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=6)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
#4
from sklearn import metrics
# Note: clf/pred at this point refer to the last model fitted above (the random forest)
acc = metrics.accuracy_score(y_test, pred)
print(acc)
f1 = metrics.f1_score(y_test, pred)
print(f1)
auc = metrics.roc_auc_score(y_test, pred)
print(auc)
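The AUC above is computed from hard 0/1 predictions. A minimal alternative sketch, assuming the fitted classifier exposes predict_proba (GaussianNB and RandomForestClassifier do; SVC only if constructed with probability=True), ranks the test samples by the predicted probability of the positive class instead:

# Sketch: AUC from probability scores rather than hard predictions
proba = clf.predict_proba(X_test)[:, 1]     # probability of the positive class
print(metrics.roc_auc_score(y_test, proba))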
#5
# The random forest, Gaussian naive Bayes, and SVM classifiers are all evaluated the same way: the data is split into training and test folds, the model is fitted on the training fold, and the held-out fold is used to check how well it learned. Over repeated runs, the random forest performed better than both Gaussian naive Bayes and the SVM.
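A minimal sketch of how such a comparison could be run end to end is given below: it reuses the X and y generated in step #1, re-fits each of the three classifiers on every fold, and reports each metric averaged over all 10 folds. This is an illustrative assumption about how the repeated experiments could be organised, not the assignment's required code.

# Sketch: mean accuracy / F1 / AUC over all 10 folds for each classifier
import numpy as np
from sklearn.model_selection import KFold
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn import metrics

classifiers = {
    'GaussianNB': GaussianNB(),
    'SVC': SVC(C=1e-01, kernel='rbf', gamma=0.1),
    'RandomForest': RandomForestClassifier(n_estimators=6),
}
kf = KFold(n_splits=10, shuffle=True)
for name, clf in classifiers.items():
    accs, f1s, aucs = [], [], []
    for train_index, test_index in kf.split(X):
        clf.fit(X[train_index], y[train_index])
        pred = clf.predict(X[test_index])
        accs.append(metrics.accuracy_score(y[test_index], pred))
        f1s.append(metrics.f1_score(y[test_index], pred))
        aucs.append(metrics.roc_auc_score(y[test_index], pred))
    print(name, np.mean(accs), np.mean(f1s), np.mean(aucs))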