Model Selection and Hyperparameter Tuning

1. Handling large datasets that are slow to load and process

The reduce_mem_usage function reduces a DataFrame's memory footprint by downcasting data types. Before adjustment, columns are stored with pandas' default types such as int64, float64, object, or datetime, yet many features do not actually need that many bits. By checking each column's value range and assigning the smallest dtype that can hold it, the function significantly lowers memory usage (at the possible cost of reduced numeric precision).

import numpy as np

def reduce_mem_usage(df):
    """ Iterate through all the columns of a dataframe and downcast each
        dtype to the smallest type that holds the column's value range,
        to reduce memory usage.
    """
    start_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage of dataframe is {:.2f} MB'.format(start_mem))
    
    for col in df.columns:
        col_type = df[col].dtype
        
        if col_type != object:
            c_min = df[col].min()
            c_max = df[col].max()
            if str(col_type)[:3] == 'int':
                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
                    df[col] = df[col].astype(np.int8)
                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
                    df[col] = df[col].astype(np.int16)
                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
                    df[col] = df[col].astype(np.int32)
                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
                    df[col] = df[col].astype(np.int64)  
            elif str(col_type)[:5] == 'float':
                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
                    df[col] = df[col].astype(np.float16)
                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
                    df[col] = df[col].astype(np.float32)
                else:
                    df[col] = df[col].astype(np.float64)
            # non-numeric types such as datetime are left unchanged
        else:
            df[col] = df[col].astype('category')

    end_mem = df.memory_usage().sum() / 1024**2  # bytes -> MB
    print('Memory usage after optimization is: {:.2f} MB'.format(end_mem))
    print('Decreased by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))
    return df
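To see why downcasting helps, here is a minimal standalone sketch of the core step the function applies to each column (it does not call reduce_mem_usage itself; the column values are a toy example):

```python
import numpy as np
import pandas as pd

# pandas stores integer columns as int64 by default
df = pd.DataFrame({'a': np.arange(100, dtype=np.int64)})
before = df.memory_usage().sum()

# All values fit in int8 (range -128..127), so downcast
df['a'] = df['a'].astype(np.int8)
after = df.memory_usage().sum()

print(before, after)  # the downcast column takes 1/8 of the space
```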

2. Linear model analysis

Trick 1: inspect the linear model's coefficients, ranked by weight

sorted(dict(zip(continuous_feature_names, model.coef_)).items(), key=lambda x:x[1], reverse=True)
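A runnable version of the same one-liner (continuous_feature_names and model come from the surrounding pipeline; here a toy dataset and feature names stand in):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data where y = 3*feat_a - 1*feat_b exactly
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 5.0], [4.0, 3.0]])
y = 3.0 * X[:, 0] - 1.0 * X[:, 1]
model = LinearRegression().fit(X, y)

feature_names = ['feat_a', 'feat_b']
# Pair each feature with its learned coefficient, sort by weight descending
ranked = sorted(dict(zip(feature_names, model.coef_)).items(),
                key=lambda x: x[1], reverse=True)
print(ranked)  # feat_a (positive weight) first, feat_b (negative) last
```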

Trick 2: the label has a long-tailed distribution; a log transform makes it approximately normal

train_y_log = np.log(train_y + 1)
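Predictions made on the log scale must be mapped back before scoring. A minimal sketch, using np.log1p/np.expm1 (the numerically stable equivalents of the line above) on a toy long-tailed label:

```python
import numpy as np

train_y = np.array([0.0, 9.0, 99.0, 9999.0])  # toy long-tailed label

train_y_log = np.log1p(train_y)   # same as np.log(train_y + 1)
restored = np.expm1(train_y_log)  # inverse transform, for predictions

print(np.allclose(restored, train_y))  # the round trip is lossless
```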

[Figure] (a) Label distribution before the log transform: long-tailed
[Figure] (b) Label distribution after the log transform: approximately normal
[Figure] (c) Fit before the transform (only one feature is plotted for clearer visualization; note problems such as negative predicted values and a poor fit)
[Figure] (d) Fit after the transform

3. Cross-validation

In practice, a trained model usually fits the training set quite well, but its fit on data outside the training set is often much less satisfactory. We therefore do not use all of the data for training; instead we hold out a portion (which does not participate in training) to evaluate the parameters learned from the training set, giving a relatively objective measure of how well they generalize to unseen data. This idea is called cross-validation.
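A minimal sketch of the idea with scikit-learn's cross_val_score (a small synthetic dataset stands in for the real train_X / train_y_ln):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

# Synthetic regression data as a stand-in for the real training set
X, y = make_regression(n_samples=200, n_features=5, noise=0.1, random_state=0)

# 5-fold CV: each fold is held out once while the other four train the model
scores = cross_val_score(LinearRegression(), X, y, cv=5)
print(scores.shape)   # one score per fold
print(scores.mean())  # averaged estimate of generalization performance
```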

4. Plotting learning curves

import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import make_scorer, mean_absolute_error
from sklearn.model_selection import learning_curve

def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel('Training examples')
    plt.ylabel('Score')
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes,
        scoring=make_scorer(mean_absolute_error))
    train_scores_mean = np.mean(train_scores, axis=1)  
    train_scores_std = np.std(train_scores, axis=1)  
    test_scores_mean = np.mean(test_scores, axis=1)  
    test_scores_std = np.std(test_scores, axis=1)  
    plt.grid()  # background grid
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,  
                     train_scores_mean + train_scores_std, alpha=0.1,  
                     color="r")  
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,  
                     test_scores_mean + test_scores_std, alpha=0.1,  
                     color="g")  
    plt.plot(train_sizes, train_scores_mean, 'o-', color='r',  
             label="Training score")  
    plt.plot(train_sizes, test_scores_mean,'o-',color="g",  
             label="Cross-validation score")  
    plt.legend(loc="best")  
    return plt  
    
from sklearn.linear_model import LinearRegression
plot_learning_curve(LinearRegression(), 'Linear_model', train_X[:1000], train_y_ln[:1000], ylim=(0.0, 0.5), cv=5, n_jobs=1)

[Figure] Learning curve: training score vs cross-validation score

5. Linear models & embedded feature selection

In filter and wrapper feature-selection methods, the feature-selection process is clearly separated from model training. In embedded feature selection, by contrast, features are selected automatically as part of training the model.

In addition, when a decision tree chooses split nodes by information entropy or the Gini index, the features selected earlier for splitting are the more important ones; this too is a form of feature selection. The feature-importance metric in XGBoost and LightGBM is computed on this basis.
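For linear models, the classic embedded method is L1 regularization (Lasso): the penalty drives uninformative coefficients to exactly zero during fitting, so feature selection happens as a side effect of training. A minimal sketch, assuming a synthetic dataset in which only a few features are informative:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

# 10 features, but only 3 actually influence the target
X, y = make_regression(n_samples=300, n_features=10, n_informative=3,
                       noise=0.1, random_state=0)

# The L1 penalty zeroes out coefficients of uninformative features
lasso = Lasso(alpha=1.0).fit(X, y)
selected = np.flatnonzero(lasso.coef_)
print(len(selected))  # far fewer than 10 features survive
```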

6. Hyperparameter tuning

1) Greedy search
# (1) Define the hyperparameter ranges to search
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import make_scorer, mean_absolute_error
from sklearn.model_selection import cross_val_score

n_estimators = [50, 60]
max_depth = [7, 10]
min_samples_split = [2, 3]
# (2) Tune one hyperparameter at a time, fixing the best value found so far
best_estimators = dict()
for n in n_estimators:
    model = RandomForestRegressor(n_estimators=n)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv=3, scoring=make_scorer(mean_absolute_error)))
    best_estimators[n] = score
best_estimators

best_depth = dict()
for dep in max_depth:
    model = RandomForestRegressor(n_estimators=min(best_estimators.items(), key=lambda x:x[1])[0], max_depth=dep)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 3, scoring=make_scorer(mean_absolute_error)))
    best_depth[dep] = score
best_depth

best_split = dict()
for s in min_samples_split:
    model = RandomForestRegressor(n_estimators=min(best_estimators.items(), key=lambda x:x[1])[0],
                          max_depth=min(best_depth.items(), key=lambda x:x[1])[0],
                          min_samples_split=s)
    score = np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 3, scoring=make_scorer(mean_absolute_error)))
    best_split[s] = score
best_split
# (3) Plot how the score changes at each tuning step
import seaborn as sns
sns.lineplot(x=['0_initial', '1_tuning_estimators', '2_tuning_depth', '3_tuning_samples_split'],
             y=[0.142, min(best_estimators.values()), min(best_depth.values()), min(best_split.values())])
2) Grid search
# (1) Exhaustively search all parameter combinations
from sklearn.model_selection import GridSearchCV
parameters = {'n_estimators': n_estimators, 'max_depth': max_depth, 'min_samples_split': min_samples_split}
model = RandomForestRegressor()
clf = GridSearchCV(model, parameters, cv=5)
clf = clf.fit(train_X, train_y_ln)
print(clf.best_params_)
# (2) Retrain with the best combination
model = RandomForestRegressor(n_estimators=clf.best_params_['n_estimators'],
                              max_depth=clf.best_params_['max_depth'],
                              min_samples_split=clf.best_params_['min_samples_split'])
np.mean(cross_val_score(model, X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)))
3) Bayesian optimization
from bayes_opt import BayesianOptimization

def rf_cv(n_estimators, max_depth, min_samples_split):
    val = cross_val_score(
        RandomForestRegressor(n_estimators = int(n_estimators),
            max_depth=int(max_depth) ,
            min_samples_split=int(min_samples_split),
        ),
        X=train_X, y=train_y_ln, verbose=0, cv = 5, scoring=make_scorer(mean_absolute_error)
    ).mean()
    return 1 - val  # BayesianOptimization maximizes, so invert the MAE
rf_bo = BayesianOptimization(
    rf_cv,
    {
    'n_estimators': (60, 100),
    'max_depth': (10, 25),
    'min_samples_split': (2, 4),
    }
)
rf_bo.maximize()
print(1 - rf_bo.max['target'])  # best (lowest) MAE found
print(rf_bo.max['params'])      # the hyperparameters that achieved it

