风火编程 -- Machine Learning: Ensemble Learning

Ensemble Learning

Voting

Description
Combine the predictions of multiple models and take a vote as the final prediction.
VotingClassifier
VotingRegressor

API

from sklearn import datasets
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X,y = datasets.make_moons(n_samples=500, noise=0.3, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
voting_clf = VotingClassifier(
    estimators=[
        ('log_clf', LogisticRegression()),
        ('svm_clf', SVC(probability=True))],  # probability=True is required for soft voting
    voting='soft')  # average the predicted class probabilities instead of counting votes
voting_clf.fit(X_train, y_train)
score = voting_clf.score(X_test, y_test)
print(score)
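
VotingRegressor (listed above) is the regression counterpart: it averages the member models' predictions. A minimal sketch; the dataset and member models here are illustrative choices, not from the original.

from sklearn import datasets
from sklearn.ensemble import VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = datasets.make_regression(n_samples=500, noise=10, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
voting_reg = VotingRegressor(
    estimators=[
        ('lin_reg', LinearRegression()),
        ('tree_reg', DecisionTreeRegressor(max_depth=5))])  # predictions are averaged
voting_reg.fit(X_train, y_train)
print(voting_reg.score(X_test, y_test))  # R^2 on the held-out data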

Bagging

Description
With sampling with replacement (bootstrap), roughly 37% of the samples are left out of bag (OOB) and can serve as a test set,
so train_test_split is not strictly necessary (see the OOB sketch after the example below).
BaggingClassifier
BaggingRegressor
API

from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import BaggingClassifier
X,y = datasets.make_moons(n_samples=500, noise=0.3, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
bagging = BaggingClassifier(
    DecisionTreeClassifier(max_leaf_nodes=16),  # base model: decision tree with at most 16 leaf nodes
    n_estimators=300,  # build 300 base classifiers
    max_samples=100,  # each classifier is trained on 100 randomly drawn samples
    max_features=2,  # each classifier uses 2 randomly drawn features
    bootstrap=True,  # sample instances with replacement
    bootstrap_features=True,  # sample features with replacement
    # oob_score=True,  # evaluate accuracy on the out-of-bag samples
    n_jobs=-1)  # use all CPU cores in parallel
bagging.fit(X_train, y_train)
score = bagging.score(X_test, y_test)
print(score)
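
As noted in the description, the out-of-bag samples can replace a held-out test set. A minimal sketch, assuming oob_score is turned on and bootstrap sampling is kept (OOB evaluation requires sampling with replacement):

from sklearn import datasets
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=123)
bagging_oob = BaggingClassifier(
    DecisionTreeClassifier(max_leaf_nodes=16),
    n_estimators=300,
    max_samples=100,
    bootstrap=True,  # sampling with replacement, required for OOB evaluation
    oob_score=True,  # score each sample only on the estimators that never saw it
    n_jobs=-1)
bagging_oob.fit(X, y)  # fit on the full dataset, no train_test_split
print(bagging_oob.oob_score_)  # accuracy estimated from the out-of-bag samples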

Boosting

AdaBoost
Increase the weights of the misclassified samples and train the next estimator on the reweighted data.
AdaBoostClassifier
AdaBoostRegressor
API

from sklearn import datasets
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X,y = datasets.make_moons(n_samples=500, noise=0.3, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
adaboost_clf = AdaBoostClassifier(
    DecisionTreeClassifier(max_depth=2), n_estimators=100)
adaboost_clf.fit(X_train, y_train)
score = adaboost_clf.score(X_test, y_test)
print(score)
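
The regression variant AdaBoostRegressor follows the same pattern. A minimal sketch on a synthetic regression dataset; the dataset and hyperparameters are illustrative.

from sklearn import datasets
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeRegressor

X, y = datasets.make_regression(n_samples=500, noise=10, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
adaboost_reg = AdaBoostRegressor(
    DecisionTreeRegressor(max_depth=2), n_estimators=100)
adaboost_reg.fit(X_train, y_train)
print(adaboost_reg.score(X_test, y_test))  # R^2 on the held-out data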

GradientBoosting
Each new model is fitted to the errors (residuals) left by the models trained so far.
GradientBoostingClassifier
GradientBoostingRegressor
API

from sklearn import datasets
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X,y = datasets.make_moons(n_samples=500, noise=0.3, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
gb_clf = GradientBoostingClassifier(max_depth=2, n_estimators=30)
gb_clf.fit(X_train, y_train)
score = gb_clf.score(X_test, y_test)
print(score)
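
GradientBoostingRegressor works the same way for regression. A minimal sketch; learning_rate is shown explicitly because it trades off against n_estimators, and the values here are illustrative.

from sklearn import datasets
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = datasets.make_regression(n_samples=500, noise=10, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
gb_reg = GradientBoostingRegressor(
    max_depth=2, n_estimators=30, learning_rate=0.1)  # each tree fits the current residuals
gb_reg.fit(X_train, y_train)
print(gb_reg.score(X_test, y_test))  # R^2 on the held-out data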

Stacking

Train a meta-model on the predictions of the base models and use its output as the final prediction; see the sketch below.
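
A minimal sketch using sklearn's StackingClassifier (available since scikit-learn 0.22); the base models, meta-model, and cv setting are illustrative choices, not from the original.

from sklearn import datasets
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=123)
X_train, X_test, y_train, y_test = train_test_split(X, y)
stacking_clf = StackingClassifier(
    estimators=[
        ('svm_clf', SVC(probability=True)),
        ('tree_clf', DecisionTreeClassifier(max_depth=3))],
    final_estimator=LogisticRegression(),  # meta-model trained on the base models' predictions
    cv=5)  # base-model predictions for the meta-model come from cross-validation
stacking_clf.fit(X_train, y_train)
print(stacking_clf.score(X_test, y_test))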
