Week 15 Homework: sklearn

from sklearn import metrics
from sklearn import datasets
from sklearn.model_selection import KFold  # cross_validation was removed in sklearn 0.20
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary classification dataset: 1000 samples, 10 features
X, y = datasets.make_classification(n_samples=1000, n_features=10)

# 10-fold cross-validation; shuffle the samples before splitting.
# Training and evaluation must happen inside the loop so that every
# fold is used, not just the last split.
kf = KFold(n_splits=10, shuffle=True)
for train_index, test_index in kf.split(X):
    X_train, y_train = X[train_index], y[train_index]
    X_test, y_test = X[test_index], y[test_index]

    # Gaussian naive Bayes
    GaussianNB_clf = GaussianNB()
    GaussianNB_clf.fit(X_train, y_train)
    GaussianNB_pred = GaussianNB_clf.predict(X_test)

    # Support vector classifier with an RBF kernel
    SVC_clf = SVC(C=1e-01, kernel='rbf', gamma=0.1)
    SVC_clf.fit(X_train, y_train)
    SVC_pred = SVC_clf.predict(X_test)

    # Random forest with 6 trees
    Random_Forest_clf = RandomForestClassifier(n_estimators=6)
    Random_Forest_clf.fit(X_train, y_train)
    Random_Forest_pred = Random_Forest_clf.predict(X_test)

    # Per-fold evaluation: accuracy, F1, and ROC AUC for each model
    GaussianNB_accuracy_score = metrics.accuracy_score(y_test, GaussianNB_pred)
    GaussianNB_f1_score = metrics.f1_score(y_test, GaussianNB_pred)
    GaussianNB_roc_auc_score = metrics.roc_auc_score(y_test, GaussianNB_pred)
    print("  GaussianNB_accuracy_score: ", GaussianNB_accuracy_score)
    print("  GaussianNB_f1_score: ", GaussianNB_f1_score)
    print("  GaussianNB_roc_auc_score: ", GaussianNB_roc_auc_score)

    SVC_accuracy_score = metrics.accuracy_score(y_test, SVC_pred)
    SVC_f1_score = metrics.f1_score(y_test, SVC_pred)
    SVC_roc_auc_score = metrics.roc_auc_score(y_test, SVC_pred)
    print("\n  SVC_accuracy_score: ", SVC_accuracy_score)
    print("  SVC_f1_score: ", SVC_f1_score)
    print("  SVC_roc_auc_score: ", SVC_roc_auc_score)

    Random_Forest_accuracy_score = metrics.accuracy_score(y_test, Random_Forest_pred)
    Random_Forest_f1_score = metrics.f1_score(y_test, Random_Forest_pred)
    Random_Forest_roc_auc_score = metrics.roc_auc_score(y_test, Random_Forest_pred)
    print("\n  Random_Forest_accuracy_score: ", Random_Forest_accuracy_score)
    print("  Random_Forest_f1_score: ", Random_Forest_f1_score)
    print("  Random_Forest_roc_auc_score: ", Random_Forest_roc_auc_score)

Ten-fold cross-validation is a commonly used evaluation method in machine learning, and in sklearn it is implemented with KFold. KFold partitions the dataset into k mutually exclusive subsets; each subset serves once as the validation set while the remaining k-1 subsets form the training set.

In sklearn's KFold, the n_splits parameter sets the number of folds. The shuffle parameter defaults to False, meaning no shuffling; when shuffle=True, the samples are shuffled before splitting, so each fold is a random sample of the data rather than a contiguous block, which gives a more reliable estimate of how well the model generalizes.

Calling split() on a KFold object returns an iterator that yields the training-set and validation-set indices for each fold; looping over it provides the index arrays used to train and evaluate the model on that fold. [1][2][3]

References:
[1][2][3] 【机器学习】(18)使用sklearn实现交叉验证, https://blog.csdn.net/m0_47256162/article/details/117636403
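The following is a minimal sketch of that iterator protocol on a toy array; the 10-sample data, fold count, and random_state value are illustrative only.

import numpy as np
from sklearn.model_selection import KFold

X = np.arange(20).reshape(10, 2)  # 10 toy samples, 2 features

# shuffle=True randomizes the sample order; random_state makes it repeatable
kf = KFold(n_splits=5, shuffle=True, random_state=0)

# split() yields one (train_index, test_index) pair per fold
for fold, (train_index, test_index) in enumerate(kf.split(X)):
    print("fold", fold, "train:", train_index, "test:", test_index)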