Machine Learning Basics: K-Nearest Neighbors, Naive Bayes, Decision Trees, and Random Forests

Copyright notice: this is a reposted article. Original blog post: https://blog.csdn.net/jingyoushui/article/details/99656150


The machine learning library used here is scikit-learn. It wraps many machine learning algorithms and ships with many datasets, which makes it a very good library for beginners.

1. The scikit-learn dataset API

sklearn.datasets: loaders for popular datasets.
datasets.load_*() loads the small datasets that ship inside the package.
datasets.fetch_*(data_home=None) fetches the large datasets, which have to be downloaded from the network; the data_home argument is the directory the data is downloaded into, and it defaults to ~/scikit_learn_data/.

2. Return type of the dataset loaders

Both load and fetch return a datasets.base.Bunch (a dictionary-like object); its attributes are listed below and illustrated in the short sketch after the list.

Attributes:
  • data: feature array, a two-dimensional numpy.ndarray of shape [n_samples, n_features]
  • target: label array, a one-dimensional numpy.ndarray of length n_samples
  • DESCR: description of the dataset
  • feature_names: feature names (the news, handwritten-digit and regression datasets do not have them)
  • target_names: label names (the regression datasets do not have them)
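
A quick way to inspect these Bunch attributes is sketched below with load_iris (the shapes in the comments are those of the iris data):

from sklearn.datasets import load_iris

iris = load_iris()
print(iris.data.shape)       # (150, 4): the [n_samples, n_features] feature array
print(iris.target.shape)     # (150,): the one-dimensional label array
print(iris.feature_names)    # names of the four iris features
print(iris.target_names)     # ['setosa' 'versicolor' 'virginica']
print(iris.DESCR[:200])      # the beginning of the dataset description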

3. Splitting a dataset

Method: sklearn.model_selection.train_test_split(arrays, *options)
x: the feature values of the dataset; y: the label values of the dataset; test_size: the size of the test set, usually a float; random_state: the random seed; different seeds give different random splits, while the same seed always gives the same split.
Return: training-set features, test-set features, training-set labels, test-set labels (split randomly by default).

Splitting a large dataset:
sklearn.datasets.fetch_20newsgroups(data_home=None, subset='train')
subset: 'train', 'test' or 'all' (optional); selects which part of the dataset to load.

datasets.clear_data_home(data_home=None) removes the data under that directory.
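
A small sketch of loading the 20 newsgroups data by subset (the first call downloads it to data_home, which defaults to ~/scikit_learn_data/):

from sklearn.datasets import fetch_20newsgroups

# subset selects which part to load: 'train', 'test' or 'all'
news_train = fetch_20newsgroups(data_home=None, subset='train')
news_all = fetch_20newsgroups(subset='all')

print(len(news_train.data), len(news_all.data))   # number of documents in each subset
print(news_all.target_names[:5])                  # the first few category names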

Example:
Load the iris dataset with load_iris and split it, keeping 25% of it as the test set.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

li = load_iris()
x_train,x_test,y_train,y_test = train_test_split(li.data,li.target,test_size=0.25)
print(x_train,y_train)
print(x_test,y_test)


4. Estimators: how sklearn implements its machine learning algorithms

In sklearn, the estimator plays a central role. Classifiers and regressors are both estimators, a family of APIs that implement the algorithms (a minimal usage sketch follows the two lists below).

1. Estimators for classification:
sklearn.neighbors (k-nearest neighbors)
sklearn.naive_bayes (naive Bayes)
sklearn.linear_model.LogisticRegression (logistic regression)

2. Estimators for regression:
sklearn.linear_model.LinearRegression (linear regression)
sklearn.linear_model.Ridge (ridge regression)
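
Whatever the concrete algorithm, an estimator is used through the same fit / predict / score methods; a minimal sketch with LinearRegression on made-up numbers:

import numpy as np
from sklearn.linear_model import LinearRegression

x = np.array([[1.0], [2.0], [3.0], [4.0]])   # toy features
y = np.array([2.0, 4.0, 6.0, 8.0])           # toy targets (y = 2x)

reg = LinearRegression()
reg.fit(x, y)                  # every estimator learns through fit()
print(reg.predict([[5.0]]))    # predict() gives roughly [10.]
print(reg.score(x, y))         # score() is R^2 for regressors, accuracy for classifiers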

5. K-nearest neighbors

Definition: if most of the k samples that are most similar to a given sample in feature space (i.e. its nearest neighbors) belong to one class, then the sample is assigned to that class as well.

sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, algorithm='auto')

n_neighbors: int, optional (default = 5); the number of neighbors used by the k-neighbors queries.

algorithm: {'auto', 'ball_tree', 'kd_tree', 'brute'}, optional; the algorithm used to find the nearest neighbors. 'ball_tree' uses a BallTree, 'kd_tree' uses a KDTree, and 'auto' tries to pick the most suitable algorithm from the values passed to fit. (The different implementations affect efficiency.)
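
A minimal sketch of these two parameters on the iris data (nothing here is specific to the check-in example that follows):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

li = load_iris()
x_train, x_test, y_train, y_test = train_test_split(li.data, li.target, test_size=0.25)

# set the two parameters described above explicitly
knn = KNeighborsClassifier(n_neighbors=5, algorithm='auto')
knn.fit(x_train, y_train)
print("Test accuracy:", knn.score(x_test, y_test))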

Data preprocessing for the check-in example:
1. Narrow down the dataset with DataFrame.query()
2. Parse the timestamp with pd.to_datetime and pd.DatetimeIndex
3. Add the split-out date features
4. Drop the date data that is no longer needed with DataFrame.drop
5. Drop the places that have fewer than n check-ins, for example (a toy version of this filter follows the snippet):
place_count = data.groupby('place_id').aggregate(np.count_nonzero)
tf = place_count[place_count.row_id > 3].reset_index()
data = data[data['place_id'].isin(tf.place_id)]
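
A toy version of that step-5 filter, with a made-up DataFrame (the place_id / row_id column names simply mirror the snippet above):

import pandas as pd

data = pd.DataFrame({
    'row_id': range(8),
    'place_id': [1, 1, 1, 1, 2, 2, 3, 3],   # place 1 has 4 check-ins, places 2 and 3 only 2 each
})

place_count = data.groupby('place_id').count()           # check-ins per place
tf = place_count[place_count.row_id > 3].reset_index()   # keep places with more than 3 check-ins
data = data[data['place_id'].isin(tf.place_id)]          # keep only rows belonging to those places
print(data)                                              # only the place_id == 1 rows remain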

Advantages and disadvantages of k-nearest neighbors:
Advantages: simple, easy to understand and easy to implement; no parameters to estimate and no training phase.
Disadvantages: it is a lazy algorithm, so classifying test samples is computationally expensive and uses a lot of memory; k must be specified, and a badly chosen k can hurt classification accuracy.
Typical use: small datasets, roughly a few thousand to a few tens of thousands of samples; test it against the concrete business scenario.
Example:

import pandas as pd
from sklearn.datasets import load_iris, fetch_20newsgroups, load_boston
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler

def knncls():
    """
    Predict the user's check-in location with k-nearest neighbors
    :return: None
    """
    # read the data
    data = pd.read_csv("./data/FBlocation/train.csv")

    # print(data.head(10))

    # process the data
    # 1. shrink the dataset with a query filter
    data = data.query("x > 1.0 & x < 1.25 & y > 2.5 & y < 2.75")

    # parse the timestamp column
    time_value = pd.to_datetime(data['time'], unit='s')

    print(time_value)

    # turn the dates into a DatetimeIndex so the components can be extracted
    time_value = pd.DatetimeIndex(time_value)

    # build a few date features (using .loc is recommended)
    data['day'] = time_value.day
    data['hour'] = time_value.hour
    data['weekday'] = time_value.weekday

    # drop the raw timestamp feature
    data = data.drop(['time'], axis=1)

    print(data)

    # drop the target places that have fewer than n check-ins
    place_count = data.groupby('place_id').count()
    # reset_index() turns the index into a column; the new index is 0, 1, 2, 3...
    tf = place_count[place_count.row_id > 3].reset_index()
    # keep only the rows whose data['place_id'] appears in tf.place_id
    data = data[data['place_id'].isin(tf.place_id)]

    # separate the features and the target
    y = data['place_id']

    x = data.drop(['place_id'], axis=1)

    # split the data into a training set and a test set
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)

    # feature engineering (standardization)
    std = StandardScaler()

    # standardize the training and test features
    x_train = std.fit_transform(x_train)

    x_test = std.transform(x_test)

    # set up the algorithm  # hyperparameters
    knn = KNeighborsClassifier()

    # # fit, predict, score
    # knn.fit(x_train, y_train)
    #
    # # get the predictions
    # y_predict = knn.predict(x_test)
    #
    # print("Predicted check-in locations:", y_predict)
    #
    # # get the accuracy
    # print("Prediction accuracy:", knn.score(x_test, y_test))

    # candidate parameter values to search over
    param = {"n_neighbors": [3, 5, 10]}

    # run the grid search
    gc = GridSearchCV(knn, param_grid=param, cv=2)
    gc.fit(x_train, y_train)

    # prediction accuracy
    print("Accuracy on the test set:", gc.score(x_test, y_test))
    print("Best result in cross-validation:", gc.best_score_)
    print("Best model selected:", gc.best_estimator_)
    print("Cross-validation results for each hyperparameter setting:", gc.cv_results_)

    return None


if __name__ == "__main__":
    knncls()


6. Naive Bayes

Bayes' formula:

P(C|W) = \frac{P(W|C)P(C)}{P(W)}
API: sklearn.naive_bayes.MultinomialNB

sklearn.naive_bayes.MultinomialNB(alpha=1.0): naive Bayes classifier
alpha: the Laplace smoothing coefficient
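
To make alpha concrete, here is a small sketch (toy word counts, made-up numbers) of the smoothed estimate MultinomialNB uses, P(word|class) = (count + alpha) / (total + alpha * n_features):

import numpy as np

counts = np.array([3, 0, 1])   # toy word counts for one class over a 3-word vocabulary
alpha = 1.0                    # Laplace smoothing coefficient
n_features = counts.size

p_unsmoothed = counts / counts.sum()
p_smoothed = (counts + alpha) / (counts.sum() + alpha * n_features)

print(p_unsmoothed)   # [0.75 0.   0.25] -> the unseen word gets probability 0
print(p_smoothed)     # [0.571... 0.143... 0.286...] -> no zero probabilities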
Advantages and disadvantages of naive Bayes classification:
Advantages:

  • The naive Bayes model comes from classical mathematical theory and has a stable classification performance.
  • It is not very sensitive to missing data, the algorithm is quite simple, and it is commonly used for text classification.
  • Classification accuracy is high and it is fast.

Disadvantages:
It needs to know the probabilities P(F1,F2,…|C), so in some cases the assumed prior model leads to poor prediction performance.

Example:

from sklearn.datasets import load_iris, fetch_20newsgroups, load_boston
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.metrics import classification_report

import pandas as pd

def naviebayes():
    """
    Text classification with naive Bayes
    :return: None
    """
    news = fetch_20newsgroups(subset='all')

    # split the data
    x_train, x_test, y_train, y_test = train_test_split(news.data, news.target, test_size=0.25)

    # extract features from the dataset
    tf = TfidfVectorizer()

    # score the importance of each article against the vocabulary of the training set, e.g. ['a', 'b', 'c', 'd']
    x_train = tf.fit_transform(x_train)

    # print(tf.get_feature_names())
    x_test = tf.transform(x_test)

    # run the naive Bayes prediction
    mlt = MultinomialNB(alpha=1.0)
    mlt.fit(x_train, y_train)
    y_predict = mlt.predict(x_test)

    print("Predicted article categories:", y_predict)
    # get the accuracy
    print("Accuracy:", mlt.score(x_test, y_test))
    print("Precision and recall for each category:")
    print(classification_report(y_test, y_predict, target_names=news.target_names))

    return None


if __name__ == "__main__":
    naviebayes()


Output:

Predicted article categories: [ 3 13  8 ... 16 17  0]
Accuracy: 0.8486842105263158
Precision and recall for each category:
                          precision    recall  f1-score   support

             alt.atheism       0.87      0.76      0.81       210
           comp.graphics       0.86      0.79      0.82       232
 comp.os.ms-windows.misc       0.84      0.80      0.82       249
comp.sys.ibm.pc.hardware       0.75      0.80      0.78       250
   comp.sys.mac.hardware       0.96      0.82      0.89       240
          comp.windows.x       0.95      0.83      0.88       246
            misc.forsale       0.95      0.73      0.83       237
               rec.autos       0.88      0.89      0.89       258
         rec.motorcycles       0.95      0.96      0.95       254
      rec.sport.baseball       0.96      0.95      0.96       262
        rec.sport.hockey       0.94      0.98      0.96       257
               sci.crypt       0.70      0.98      0.82       227
         sci.electronics       0.82      0.82      0.82       228
                 sci.med       0.95      0.91      0.93       227
               sci.space       0.88      0.94      0.91       235
  soc.religion.christian       0.59      0.99      0.74       252
      talk.politics.guns       0.73      0.97      0.83       227
   talk.politics.mideast       0.93      0.97      0.95       256
      talk.politics.misc       0.98      0.57      0.72       215
      talk.religion.misc       0.97      0.20      0.33       150

                accuracy                           0.85      4712
               macro avg       0.87      0.83      0.83      4712
            weighted avg       0.87      0.85      0.84      4712

7. Evaluation metrics

estimator.score(): the most commonly used metric is accuracy, the percentage of predictions that are correct.
Precision: among the samples predicted as positive, the fraction that really are positive (how precise the positive predictions are).
Recall: among the samples that really are positive, the fraction predicted as positive (how completely the positives are found; the ability to recognize positive samples).
sklearn.metrics.classification_report(y_true, y_pred, target_names=None)
y_true: the true target values
y_pred: the target values predicted by the estimator
target_names: the names of the target classes
return: the precision and recall of each class
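
A minimal sketch of the call with made-up labels:

from sklearn.metrics import classification_report

y_true = [0, 1, 1, 0, 1, 0]
y_pred = [0, 1, 0, 0, 1, 1]

# one precision/recall/f1 row per class, plus the averages
print(classification_report(y_true, y_pred, target_names=['negative', 'positive']))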

8. Cross-validation and grid search

Cross-validation: split the available data into a training part and a validation part. For example, divide the data into 5 folds, use one fold as the validation set, and repeat the test 5 times (5 rounds), swapping in a different validation fold each time. This produces 5 model results, and their average is taken as the final result. This is called 5-fold cross-validation.
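
A small sketch of 5-fold cross-validation with cross_val_score on the iris data (KNN as the estimator; the mean of the five fold scores is the final estimate):

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

li = load_iris()
knn = KNeighborsClassifier(n_neighbors=5)

scores = cross_val_score(knn, li.data, li.target, cv=5)   # one validation score per fold
print(scores)          # five accuracies, one per fold
print(scores.mean())   # their average is taken as the final result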

Hyperparameter search with grid search: many parameters usually have to be set by hand (such as k in the k-nearest-neighbors algorithm); these are called hyperparameters. Setting them manually is tedious, so the model is given several candidate hyperparameter combinations in advance, each combination is evaluated with cross-validation, and the best combination is then used to build the model.

API: sklearn.model_selection.GridSearchCV
sklearn.model_selection.GridSearchCV(estimator, param_grid=None, cv=None) runs an exhaustive search over the given parameter values of an estimator.

estimator: the estimator object, e.g. knn; param_grid: the estimator parameters (dict), e.g. {"n_neighbors": [1, 3, 5]}; cv: the number of cross-validation folds, commonly 10.

fit: feed in the training data; score: accuracy. Result analysis: best_score_: the best score observed in cross-validation; best_estimator_: the model with the best parameters; cv_results_: the validation-set and training-set accuracy of every round of cross-validation.

9. Decision trees

The idea behind decision trees is very simple. It comes from the if-then conditional branching used in programming; the earliest decision trees were a classification method that used exactly this kind of structure to split the data.

Information entropy:

H(X) = -\sum_{x \in X} P(x) \log P(x)

and the conditional entropy of a dataset D given a feature A:

H(D|A) = \sum_{i=1}^{n} \frac{|D_i|}{|D|} H(D_i) = -\sum_{i=1}^{n} \frac{|D_i|}{|D|} \sum_{k=1}^{K} \frac{|D_{ik}|}{|D_i|} \log \frac{|D_{ik}|}{|D_i|}
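
A short sketch that evaluates these formulas on a toy label set (6 positives and 4 negatives, split by a hypothetical feature A):

import numpy as np

def entropy(labels):
    # H = -sum(p * log2(p)) over the class proportions
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

D = np.array([1, 1, 1, 1, 1, 1, 0, 0, 0, 0])   # 6 positives, 4 negatives
D1, D2 = D[:5], D[5:]                          # the two subsets induced by feature A

H_D = entropy(D)                               # H(D), about 0.971
H_D_given_A = (len(D1) / len(D)) * entropy(D1) + (len(D2) / len(D)) * entropy(D2)   # H(D|A)
print("H(D) =", H_D)
print("information gain g(D, A) =", H_D - H_D_given_A)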

Algorithms commonly used for decision trees:
ID3: maximize the information gain
C4.5: maximize the information gain ratio
CART:
  regression trees: minimize the squared error
  classification trees: minimize the Gini index (the splitting criterion can be chosen in sklearn)

sklearn decision-tree API:
class sklearn.tree.DecisionTreeClassifier(criterion='gini', max_depth=None, random_state=None)

criterion: defaults to the 'gini' index; 'entropy' (information gain) can also be chosen
max_depth: the depth of the tree
random_state: random seed
method:
decision_path: returns the decision path of the tree

Advantages and disadvantages of decision trees:
Advantages:

  • Simple to understand and to interpret; the tree can be visualized.
  • Requires little data preparation, while other techniques usually need the data to be normalized.

Disadvantages:

  • Decision-tree learners can build overly complex trees that do not generalize the data well; this is called overfitting.
  • Decision trees can be unstable, because small variations in the data may produce a completely different tree.

Improvements:

  • pruning (the CART pruning algorithm)
  • random forests

Example:
A survival classification model for the Titanic passengers.

1. Read the data with pandas
2. Select the influential features and handle the missing values
3. Do the feature engineering: convert the DataFrame to dicts and extract the features
   x_train.to_dict(orient="records")
4. Run the decision-tree estimator workflow
Saving the tree structure locally:
1. sklearn.tree.export_graphviz() exports the tree in DOT format
   tree.export_graphviz(estimator, out_file='tree.dot', feature_names=['', ''])
2. Tooling (converts the dot file to pdf or png):
   install graphviz; ubuntu: sudo apt-get install graphviz; Mac: brew install graphviz
3. Then run the command
$ dot -Tpng tree.dot -o tree.png

from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

def decision():
    """
    Predict Titanic survival with a decision tree
    :return: None
    """
    # get the data
    titan = pd.read_csv("http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic.txt")

    # print(titan.head(10))

    # process the data: pick the features and the target
    x = titan[['pclass', 'age', 'sex']]
    y = titan['survived']
    # handle missing values
    x['age'].fillna(x['age'].mean(), inplace=True)

    print(x)
    print("#" * 50)
    # split into a training set and a test set
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
    # feature engineering: feature -> category -> one-hot encoding
    dict = DictVectorizer(sparse=False)

    x_train = dict.fit_transform(x_train.to_dict(orient="records"))

    print(dict.get_feature_names())
    print(x_train)
    print("#" * 50)

    x_test = dict.transform(x_test.to_dict(orient="records"))

    # predict with a decision tree
    dec = DecisionTreeClassifier()

    # dec = DecisionTreeClassifier(max_depth=10)

    dec.fit(x_train, y_train)

    # prediction accuracy
    print("Prediction accuracy:", dec.score(x_test, y_test))

    # export the tree structure
    export_graphviz(dec, out_file="./tree.dot", feature_names=['age', 'pclass=1st', 'pclass=2nd', 'pclass=3rd', 'sex=female', 'sex=male'])

    return None


if __name__ == "__main__":
    decision()


Output:

     pclass        age     sex
0       1st  29.000000  female
1       1st   2.000000  female
2       1st  30.000000    male
3       1st  25.000000  female
4       1st   0.916700    male
...     ...        ...     ...
1308    3rd  31.194181    male
1309    3rd  31.194181    male
1310    3rd  31.194181    male
1311    3rd  31.194181  female
1312    3rd  31.194181    male

[1313 rows x 3 columns]
##################################################
['age', 'pclass=1st', 'pclass=2nd', 'pclass=3rd', 'sex=female', 'sex=male']
[[31.19418104  0.          0.          1.          1.          0.        ]
 [31.19418104  0.          0.          1.          0.          1.        ]
 [24.          1.          0.          0.          0.          1.        ]
 ...
 [31.19418104  0.          0.          1.          0.          1.        ]
 [52.          1.          0.          0.          1.          0.        ]
 [38.          1.          0.          0.          0.          1.        ]]
##################################################
Prediction accuracy: 0.8267477203647416


10. Random forests

A random forest is an ensemble learning method. Ensemble learning solves a single prediction problem by building and combining several models: it works by generating multiple classifiers/models that each learn and make predictions independently. Those predictions are then combined into a single prediction, which is therefore better than the prediction of any single classifier.

Definition: in machine learning, a random forest is a classifier that contains many decision trees, and the class it outputs is the mode of the classes output by the individual trees.

Learning algorithm
Each single tree is built by the following procedure (a toy sketch of the bootstrap sampling in step 3 follows this list):
1. Let N be the number of training samples and M the number of features.
2. Choose the number of input features m used to decide the outcome at a node of the tree, where m should be much smaller than M.
3. Sample N times with replacement from the N training samples to form a training set (this is bootstrap sampling, so it may contain duplicates), and use the samples that were never drawn to make predictions and estimate the error.
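
A toy sketch of the bootstrap sampling in step 3, using numpy only (the out-of-bag samples are simply the indices that were never drawn):

import numpy as np

N = 10                                               # number of training samples
rng = np.random.default_rng(0)

bootstrap_idx = rng.integers(0, N, size=N)           # sample N times with replacement (duplicates allowed)
oob_idx = np.setdiff1d(np.arange(N), bootstrap_idx)  # never-drawn samples, usable for error estimation

print("bootstrap sample indices:", bootstrap_idx)
print("out-of-bag indices:", oob_idx)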

Why sample the training set randomly?
If there were no random sampling, every tree would be trained on the same data, and the trained trees would all give exactly the same classification results.

Why sample with replacement?
If the sampling were done without replacement, every tree would get completely different, non-overlapping training samples, so every tree would be 'biased' and thoroughly 'one-sided' (admittedly a loose way to put it); in other words, the trees would differ enormously from one another, while the final classification of the random forest depends on the vote of many trees (weak classifiers).

Random forest API:
class sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini',
max_depth=None, bootstrap=True, random_state=None)
Random forest classifier
n_estimators: integer, optional (default = 10); the number of trees in the forest
criterion: string, optional (default = "gini"); the function that measures the quality of a split
max_depth: integer or None, optional (default = None); the maximum depth of a tree
bootstrap: boolean, optional (default = True); whether to use with-replacement (bootstrap) sampling when building the trees

Advantages of random forests:

  • Excellent accuracy among current algorithms
  • Runs efficiently on large datasets
  • Can handle input samples with high-dimensional features, without dimensionality reduction
  • Can estimate the importance of each feature for the classification problem (see the sketch below)
  • Also achieves good results when many values are missing
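
For the feature-importance point, a fitted forest exposes feature_importances_; a brief sketch on the iris data:

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

li = load_iris()
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(li.data, li.target)

# importance of each of the four iris features (they sum to 1)
for name, importance in zip(li.feature_names, rf.feature_importances_):
    print(name, round(importance, 3))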

Example:

from sklearn.feature_extraction import DictVectorizer
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.tree import DecisionTreeClassifier, export_graphviz
from sklearn.ensemble import RandomForestClassifier
import pandas as pd

def decision():
    """
    Predict Titanic survival with a random forest
    :return: None
    """
    # get the data
    titan = pd.read_csv("http://biostat.mc.vanderbilt.edu/wiki/pub/Main/DataSets/titanic.txt")
    # process the data: pick the features and the target
    x = titan[['pclass', 'age', 'sex']]
    y = titan['survived']

    # handle missing values
    x['age'].fillna(x['age'].mean(), inplace=True)
    print(x)
    print("#" * 50)
    # split into a training set and a test set
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25)
    # feature engineering: feature -> category -> one-hot encoding
    dict = DictVectorizer(sparse=False)
    x_train = dict.fit_transform(x_train.to_dict(orient="records"))
    print(dict.get_feature_names())
    print(x_train)
    print("#" * 50)
    x_test = dict.transform(x_test.to_dict(orient="records"))

    # predict with a random forest (with hyperparameter tuning)
    rf = RandomForestClassifier()
    param = {"n_estimators": [120, 200, 300, 500, 800, 1200], "max_depth": [5, 8, 15, 25, 30]}

    # grid search with cross-validation
    gc = GridSearchCV(rf, param_grid=param, cv=2)
    gc.fit(x_train, y_train)

    print("Accuracy:", gc.score(x_test, y_test))
    print("Selected parameter model:", gc.best_params_)

    return None


if __name__ == "__main__":
    decision()


Output:

     pclass        age     sex
0       1st  29.000000  female
1       1st   2.000000  female
2       1st  30.000000    male
3       1st  25.000000  female
4       1st   0.916700    male
...     ...        ...     ...
1308    3rd  31.194181    male
1309    3rd  31.194181    male
1310    3rd  31.194181    male
1311    3rd  31.194181  female
1312    3rd  31.194181    male

[1313 rows x 3 columns]
##################################################
['age', 'pclass=1st', 'pclass=2nd', 'pclass=3rd', 'sex=female', 'sex=male']
[[31.19418104  0.          0.          1.          1.          0.        ]
 [20.          1.          0.          0.          1.          0.        ]
 [39.          1.          0.          0.          1.          0.        ]
 ...
 [19.          1.          0.          0.          1.          0.        ]
 [19.          0.          1.          0.          1.          0.        ]
 [31.19418104  0.          0.          1.          0.          1.        ]]
##################################################
Accuracy: 0.817629179331307
Selected parameter model: {'max_depth': 5, 'n_estimators': 200}


To be continued.

