1. Introduction to Random Forests
A random forest is an ensemble learning method: its core idea is to combine many weak classifiers so that together they outperform any single one ("three cobblers with their wits combined beat Zhuge Liang"). Random forests are built on the Bagging (bootstrap aggregating) idea, which works as follows:
(1) Repeatedly draw n training samples from the training set with replacement, forming a new training set each time;
(2) Train M sub-models, one on each of the new training sets;
(3) For classification, take a majority vote: the class predicted by the most sub-models is the final class. For regression, average the sub-models' predictions.
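The three Bagging steps above can be sketched by hand, using shallow decision trees as the weak learners. This is a minimal illustration on synthetic data; the dataset, tree depth, and the choice of M = 10 are arbitrary assumptions for the example, not part of the original:

```python
import numpy as np
from collections import Counter
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Toy dataset standing in for a real training set (assumption for illustration)
X, y = make_classification(n_samples=200, n_features=8, random_state=0)

M = 10                    # number of sub-models
n = len(X)                # bootstrap sample size (same as training-set size)
rng = np.random.default_rng(0)

models = []
for _ in range(M):
    # (1) draw n samples with replacement to form a new training set
    idx = rng.integers(0, n, size=n)
    # (2) train a weak learner (a shallow tree) on the bootstrap sample
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X[idx], y[idx])
    models.append(tree)

# (3) classification: majority vote across the M sub-models
def bagging_predict(row):
    votes = [m.predict(row.reshape(1, -1))[0] for m in models]
    return Counter(votes).most_common(1)[0][0]

print(bagging_predict(X[0]))
```

A random forest adds one more twist on top of this: each tree also considers only a random subset of features at every split, which further decorrelates the sub-models.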
2. Building a Random Forest on the Titanic Data
The code is as follows (example):
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split,GridSearchCV
from sklearn.feature_extraction import DictVectorizer
from sklearn.ensemble import RandomForestClassifier
# Read the data
data = pd.read_csv('E://titanic//train.csv')
print(type(data))
print(data.columns)
# Cast Pclass to string so DictVectorizer treats it as a category, not a number
data['Pclass'] = data['Pclass'].astype(str) + 'st'
print(data.head())
# Extract the features and the target
x = data[['Pclass','Age','Sex']].copy()
y = data['Survived']
# Fill NaN values in Age with the column mean
# (assign back instead of inplace on a slice, to avoid a chained-assignment warning)
x['Age'] = x['Age'].fillna(data['Age'].mean())
# Split into training and test sets
x_train,x_test,y_train,y_test = train_test_split(x,y,random_state=22,test_size=0.2)
# Convert the rows to a list of dicts, the input format DictVectorizer expects
x_train = x_train.to_dict(orient='records')
x_test = x_test.to_dict(orient='records')
print(x_train)
print(x_test)
# One-hot encode the categorical features
transfer = DictVectorizer(sparse=False)
x_train = transfer.fit_transform(x_train)
print(x_train)
print(transfer.get_feature_names_out())
# Use transform (not fit_transform) so the test set reuses the training-set encoding
x_test = transfer.transform(x_test)
estimator = RandomForestClassifier()
# Grid-search over the number of trees and the tree depth, with 5-fold cross-validation
param_grid = {'n_estimators':[100,200,300,500,800,1200],'max_depth':[5,8,15,25,30]}
estimator = GridSearchCV(estimator,param_grid=param_grid,cv=5)
estimator.fit(x_train,y_train)
pre = estimator.predict(x_test)
print(pre)
print(estimator.score(x_test,y_test))
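Beyond `predict` and `score`, the fitted GridSearchCV object also exposes which hyper-parameter combination won and its cross-validated score. A self-contained sketch on synthetic data (the dataset and the smaller parameter grid here are assumptions chosen only to keep the example fast):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Toy data in place of the Titanic features (assumption for illustration)
X, y = make_classification(n_samples=300, random_state=0)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={'n_estimators': [50, 100], 'max_depth': [3, 5]},
    cv=3,
)
grid.fit(X, y)

print(grid.best_params_)   # the winning parameter combination
print(grid.best_score_)    # its mean cross-validated accuracy
# grid.predict / grid.score use the best estimator, refit on all the data
```

Inspecting `best_params_` is useful in practice: if the winner sits on the edge of the grid (e.g. the largest `n_estimators` tried), the grid probably needs to be extended in that direction.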