[Machine Learning] Fine-tuning a model with scikit-learn in Python: using RandomizedSearchCV to select parameters for a pipeline


End-to-end machine learning series navigation:
[Machine Learning] Loading and displaying a CSV data file with pandas in Python, and plotting histograms
[Machine Learning] Plotting two-dimensional data with matplotlib in Python and saving it as a PNG image
[Machine Learning] Splitting training and test sets three ways with pandas and scikit-learn
[Machine Learning] Visualizing input data with pandas and matplotlib, and computing correlations
[Machine Learning] Data preprocessing with scikit-learn: missing-value imputation and text handling (Part 1)
[Machine Learning] Pipeline processing with scikit-learn and pandas: normalization, standardization, and custom transformers (Part 2)
[Machine Learning] Evaluating models with scikit-learn: model selection via k-fold cross-validation with simple and stratified sampling
[Machine Learning] Fine-tuning models with scikit-learn: GridSearchCV and RandomizedSearchCV
[Machine Learning] Evaluating models with scikit-learn: 95% confidence intervals for model error using the t- and z-distributions
[Machine Learning] Fine-tuning models with scikit-learn: setting distribution parameters for RandomizedSearchCV
[Machine Learning] Fine-tuning models with scikit-learn: a transform that keeps the k most important features by feature contribution
[Machine Learning] Fine-tuning models with scikit-learn: using RandomizedSearchCV to select parameters for a pipeline

Data preparation:

import os
import pandas as pd
import numpy as np

HOUSING_PATH = os.path.join("datasets", "housing")

def load_housing_data(housing_path=HOUSING_PATH):
    csv_path = os.path.join(housing_path, "housing.csv")
    return pd.read_csv(csv_path)

housing = load_housing_data()
housing2 = housing.copy()

from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer
from sklearn.pipeline import Pipeline
from sklearn.compose import ColumnTransformer

# Before preprocessing, drop the text column and the prediction target column
# (keeping the target among the features would leak it into training)
housing_num = housing2.drop(["ocean_proximity", "median_house_value"], axis=1)

num_pipeline = Pipeline([
        ('imputer', SimpleImputer(strategy="median")),
        ('std_scaler', StandardScaler())
    ])

# list(df) returns all column names
num_attribs = list(housing_num)

cat_attribs = ["ocean_proximity"]
# Determine the full category set for the one-hot column up front; otherwise,
# after a train/test split, the one-hot vectors can end up with different
# column counts depending on which categories each split happens to contain
categories = housing2['ocean_proximity'].unique()

full_pipeline = ColumnTransformer([
        ("num", num_pipeline, num_attribs),
        ("cat", OneHotEncoder(categories=[categories]), cat_attribs),
    ])

# Take out the target column to prepare the training data
housing_labels = housing2["median_house_value"]

housing_prepared = full_pipeline.fit_transform(housing2)

from sklearn.model_selection import train_test_split
train_set, test_set, train_sety, test_sety = train_test_split(
    housing_prepared, housing_labels, test_size=0.1, random_state=42)
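As a side note on the `categories` argument above, a minimal self-contained sketch (toy data, not the housing set) of why fixing the category list matters: an encoder that infers its categories from a split missing one class produces fewer columns than one given the full category set.

```python
# Sketch (assumed toy data): why passing `categories` to OneHotEncoder matters.
import numpy as np
from sklearn.preprocessing import OneHotEncoder

subset = np.array([["A"], ["B"], ["A"]])  # category "C" never appears here

learned = OneHotEncoder().fit(subset)  # categories inferred from the subset only
fixed = OneHotEncoder(categories=[["A", "B", "C"]]).fit(subset)  # full set given

print(learned.transform(subset).shape[1])  # 2 columns: "C" is unknown to it
print(fixed.transform(subset).shape[1])    # 3 columns, consistent across splits
```

With the category list fixed, every split of the data encodes to the same number of columns, so models trained and evaluated on different splits see a consistent feature layout.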

Searching over the number of selected features and the missing-value imputation strategy:
For details on the feature selection step, see the earlier post: [Machine Learning] Fine-tuning models with scikit-learn: a transform that keeps the k most important features by decision-tree feature contribution.
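Before the full code, the top-k index trick that `TopFeatureSelector` relies on can be sketched on a toy importance array (the values here are made up for illustration):

```python
# Sketch of the top-k selection trick: np.argpartition moves the indices of
# the k largest values into the last k slots (in no particular order);
# sorting those indices restores ascending column order.
import numpy as np

importances = np.array([0.10, 0.40, 0.05, 0.25, 0.20])  # toy values
k = 2
top_k = np.sort(np.argpartition(importances, -k)[-k:])
print(top_k)  # → [1 3] : the columns with importances 0.40 and 0.25
```

Unlike a full `np.argsort`, `np.argpartition` only partially orders the array, which is cheaper when only the top k indices are needed.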

from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import randint
from sklearn.ensemble import RandomForestRegressor
import numpy as np

forest_reg = RandomForestRegressor(random_state=42)

# Fit on a subsample just to obtain the feature importances
forest_reg.fit(train_set[:1000], train_sety[:1000])
print(forest_reg.feature_importances_)
importantArray = forest_reg.feature_importances_

from sklearn.base import BaseEstimator, TransformerMixin

def indices_of_top_k(arr, k):
    # np.argpartition partitions arr around the element at the given position:
    # everything before it is smaller, everything after it is larger.
    # np.argpartition(np.array(arr), -k) guarantees the last k slots hold the
    # k largest values; [-k:] takes those k slots, so the combination finds
    # the k largest entries in the array.
    # The function returns indices, which are then sorted.
    return np.sort(np.argpartition(np.array(arr), -k)[-k:])

class TopFeatureSelector(BaseEstimator, TransformerMixin):
    # feature_importances holds the importance scores; larger means more important
    def __init__(self, feature_importances, kparam):
        self.feature_importances = feature_importances
        self.kparam = kparam
    def fit(self, X, y=None):
        self.feature_indices_ = indices_of_top_k(self.feature_importances, self.kparam)
        return self
    def transform(self, X):
        return X[:, self.feature_indices_]

# With the importances in hand, retrain on the selected features:
full_pipeline2 = Pipeline([
        ('prepared', full_pipeline),
        # During the parameter search, the searched value overrides the 5 given here
        ("filter", TopFeatureSelector(importantArray, 5)),
        ("result", RandomForestRegressor(random_state=42))
    ])

# The key is how the parameter distributions are written:
param_distribs2 = {
        # Note: the key must match the attribute stored as self.kparam,
        # not just any constructor argument name
        'filter__kparam': randint(low=1, high=15),
        # When pipelines are nested, join the step names of each pipeline level
        # with a double underscore and put the actual parameter name last.
        # When the search range is a list, values are sampled from it with
        # equal probability
        'prepared__num__imputer__strategy': ['mean', 'median', 'most_frequent'],
    }

rnd_search = RandomizedSearchCV(full_pipeline2, param_distributions=param_distribs2,
                                n_iter=10, cv=2, scoring='neg_mean_squared_error',
                                verbose=2, random_state=42)
rnd_search.fit(housing2, housing_labels)
rnd_search.best_params_
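The double-underscore naming rule can be checked end to end on a small self-contained example (synthetic data and hypothetical step names `prep`/`model`, not the housing pipeline): each level of nesting contributes its step name, joined by `__`, with the hyperparameter name last.

```python
# Minimal sketch of double-underscore parameter naming for nested pipelines
# (toy data; step names "prep" and "model" are chosen for this example).
import numpy as np
from scipy.stats import randint
from sklearn.pipeline import Pipeline
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import RandomizedSearchCV

rng = np.random.RandomState(42)
X = rng.rand(60, 4)
y = X @ np.array([1.0, 2.0, 0.5, 0.1]) + rng.rand(60) * 0.1

inner = Pipeline([("imputer", SimpleImputer()), ("scaler", StandardScaler())])
outer = Pipeline([("prep", inner),
                  ("model", RandomForestRegressor(n_estimators=10, random_state=42))])

search = RandomizedSearchCV(
    outer,
    param_distributions={
        "prep__imputer__strategy": ["mean", "median"],  # outer step, inner step, parameter
        "model__max_depth": randint(2, 6),              # step name + "__" + parameter
    },
    n_iter=4, cv=2, random_state=42,
)
search.fit(X, y)
print(search.best_params_)  # keys mirror the nested step names
```

To see every tunable name a pipeline exposes, `outer.get_params().keys()` lists all valid double-underscore keys.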

The program output is:

Fitting 2 folds for each of 10 candidates, totalling 20 fits
[CV] END filter__kparam=7, prepared__num__imputer__strategy=mean; total time=   3.1s
[CV] END filter__kparam=7, prepared__num__imputer__strategy=mean; total time=   3.0s
[CV] END filter__kparam=11, prepared__num__imputer__strategy=mean; total time=   4.1s
[CV] END filter__kparam=11, prepared__num__imputer__strategy=mean; total time=   3.9s
[CV] END filter__kparam=5, prepared__num__imputer__strategy=most_frequent; total time=   2.2s
[CV] END filter__kparam=5, prepared__num__imputer__strategy=most_frequent; total time=   2.1s
[CV] END filter__kparam=10, prepared__num__imputer__strategy=most_frequent; total time=   3.8s
[CV] END filter__kparam=10, prepared__num__imputer__strategy=most_frequent; total time=   3.7s
[CV] END filter__kparam=7, prepared__num__imputer__strategy=most_frequent; total time=   2.9s
[CV] END filter__kparam=7, prepared__num__imputer__strategy=most_frequent; total time=   3.0s
[CV] END filter__kparam=11, prepared__num__imputer__strategy=mean; total time=   3.9s
[CV] END filter__kparam=11, prepared__num__imputer__strategy=mean; total time=   4.1s
[CV] END filter__kparam=4, prepared__num__imputer__strategy=most_frequent; total time=   1.8s
[CV] END filter__kparam=4, prepared__num__imputer__strategy=most_frequent; total time=   1.7s
[CV] END filter__kparam=6, prepared__num__imputer__strategy=mean; total time=   2.6s
[CV] END filter__kparam=6, prepared__num__imputer__strategy=mean; total time=   2.5s
[CV] END filter__kparam=2, prepared__num__imputer__strategy=median; total time=   1.2s
[CV] END filter__kparam=2, prepared__num__imputer__strategy=median; total time=   1.2s
[CV] END filter__kparam=6, prepared__num__imputer__strategy=median; total time=   2.5s
[CV] END filter__kparam=6, prepared__num__imputer__strategy=median; total time=   2.5s
#The final best parameters are:
{'filter__kparam': 2, 'prepared__num__imputer__strategy': 'median'}