2023.8 Summer Camp "New User Prediction" Study Notes (5)

Today's task: feature engineering

        Feature engineering is the process of transforming raw data into features suitable for model training, with the goal of obtaining better training features. Good feature engineering improves model performance, sometimes achieving strong results even with a simple model.

Extracting new features

# Import third-party libraries
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt

# Suppress warnings
import warnings
warnings.filterwarnings('ignore')

# Configure matplotlib fonts so Chinese labels display correctly
plt.rcParams['font.sans-serif'] = ['SimSun', 'Times New Roman']
plt.rcParams['font.size'] = 10
plt.rcParams['axes.unicode_minus'] = False

        First, import the libraries and modules needed for model training, cross-validation, evaluation, and feature-importance visualization. A decision tree model is used again this time.

# Read the training and test set files
train_data = pd.read_csv('E:/编程/用户新增预测大赛/train.csv')
test_data = pd.read_csv('E:/编程/用户新增预测大赛/test.csv')


# Expand the udmap field into key1~key9 columns (manual one-hot-style encoding)
def udmap_onethot(d):
    v = np.zeros(9)
    if d == 'unknown':
        return v
    d = eval(d)
    for i in range(1, 10):
        if 'key' + str(i) in d:
            v[i - 1] = d['key' + str(i)]

    return v
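
# Quick sanity check on a hypothetical udmap string (made up, not taken from the data):
# udmap_onethot('{"key1": 3, "key3": 7}') -> array([3., 0., 7., 0., 0., 0., 0., 0., 0.])
# Note that each slot stores the key's value rather than a 0/1 indicator, and that
# the string is parsed with eval(), which assumes the udmap field is trusted input.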


train_udmap_df = pd.DataFrame(np.vstack(train_data['udmap'].apply(udmap_onethot)))
test_udmap_df = pd.DataFrame(np.vstack(test_data['udmap'].apply(udmap_onethot)))
train_udmap_df.columns = ['key' + str(i) for i in range(1, 10)]
test_udmap_df.columns = ['key' + str(i) for i in range(1, 10)]

# Concatenate the udmap features with the original data
train_data = pd.concat([train_data, train_udmap_df], axis=1)
test_data = pd.concat([test_data, test_udmap_df], axis=1)

# Frequency feature for eid
train_data['eid_freq'] = train_data['eid'].map(train_data['eid'].value_counts())
test_data['eid_freq'] = test_data['eid'].map(train_data['eid'].value_counts())

# Target (label) statistics for eid
train_data['eid_mean'] = train_data['eid'].map(train_data.groupby('eid')['target'].mean())
test_data['eid_mean'] = test_data['eid'].map(train_data.groupby('eid')['target'].mean())

train_data['eid_std'] = train_data['eid'].map(train_data.groupby('eid')['target'].std())
test_data['eid_std'] = test_data['eid'].map(train_data.groupby('eid')['target'].std())

# Frequency and target features for x1~x8
for i in range(1, 9):
    train_data['x' + str(i) + '_freq'] = train_data['x' + str(i)].map(train_data['x' + str(i)].value_counts())
    test_data['x' + str(i) + '_freq'] = test_data['x' + str(i)].map(train_data['x' + str(i)].value_counts())
    train_data['x' + str(i) + '_mean'] = train_data['x' + str(i)].map(train_data.groupby('x' + str(i))['target'].mean())
    test_data['x' + str(i) + '_mean'] = test_data['x' + str(i)].map(train_data.groupby('x' + str(i))['target'].mean())
    train_data['x' + str(i) + '_std'] = train_data['x' + str(i)].map(train_data.groupby('x' + str(i))['target'].std())
    test_data['x' + str(i) + '_std'] = test_data['x' + str(i)].map(train_data.groupby('x' + str(i))['target'].std())

# Frequency and target features for key1~key9
for i in range(1, 10):
    train_data['key' + str(i) + '_freq'] = train_data['key' + str(i)].map(train_data['key' + str(i)].value_counts())
    test_data['key' + str(i) + '_freq'] = test_data['key' + str(i)].map(train_data['key' + str(i)].value_counts())
    train_data['key' + str(i) + '_mean'] = train_data['key' + str(i)].map(train_data.groupby('key' + str(i))['target'].mean())
    test_data['key' + str(i) + '_mean'] = test_data['key' + str(i)].map(train_data.groupby('key' + str(i))['target'].mean())
    train_data['key' + str(i) + '_std'] = train_data['key' + str(i)].map(train_data.groupby('key' + str(i))['target'].std())
    test_data['key' + str(i) + '_std'] = test_data['key' + str(i)].map(train_data.groupby('key' + str(i))['target'].std())
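
# The statistics above can produce NaNs: .std() is NaN for categories that occur
# only once, and test categories never seen in the training set map to NaN.
# The fillna(0) below replaces them all with 0.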

train_data = train_data.fillna(0)
test_data = test_data.fillna(0)

# Parse the timestamp and extract datetime features
train_data['common_ts'] = pd.to_datetime(train_data['common_ts'], unit='ms')
test_data['common_ts'] = pd.to_datetime(test_data['common_ts'], unit='ms')
train_data['common_ts_hour'] = train_data['common_ts'].dt.hour
test_data['common_ts_hour'] = test_data['common_ts'].dt.hour
train_data['common_ts_day'] = train_data['common_ts'].dt.day
test_data['common_ts_day'] = test_data['common_ts'].dt.day
train_data['common_ts_minute'] = train_data['common_ts'].dt.minute
test_data['common_ts_minute'] = test_data['common_ts'].dt.minute

# Encode whether udmap is unknown/empty
train_data['udmap_isunknown'] = (train_data['udmap'] == 'unknown').astype(int)
test_data['udmap_isunknown'] = (test_data['udmap'] == 'unknown').astype(int)

        This time, frequency, mean, and standard-deviation features are newly introduced for the x{i} and key{i} fields, and day and minute features are added alongside the existing hour feature. The toy example below illustrates what the frequency and target-mean encodings compute; after that, cross-validation is run.
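        A minimal illustration of the two encodings on made-up data (the toy frame is an assumption, not competition data):

toy = pd.DataFrame({'eid': [1, 1, 2], 'target': [0, 1, 1]})
print(toy['eid'].map(toy['eid'].value_counts()))            # frequency encoding: 2, 2, 1
print(toy['eid'].map(toy.groupby('eid')['target'].mean()))  # target-mean encoding: 0.5, 0.5, 1.0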

Cross-validation

pred = cross_val_predict(
    DecisionTreeClassifier(),
    train_data.drop(['udmap', 'common_ts', 'uuid', 'target'], axis=1),
    train_data['target']
)
print(classification_report(train_data['target'], pred, digits=3))
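
        cross_val_predict defaults to 5-fold cross-validation, and classification_report prints per-class precision, recall, and F1. To get a single number for before/after comparisons, macro F1 can be computed directly (a minimal sketch; that F1 is the relevant competition metric is an assumption here):

from sklearn.metrics import f1_score

print('macro F1:', f1_score(train_data['target'], pred, average='macro'))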

        Then compare the results before and after introducing the new features.

Before the new features

After the new features

        As the reports show, the model's accuracy improves significantly with the new features. Next comes feature-importance visualization.

Feature importance visualization

# Master list of all candidate columns
l0 = ['x1_freq', 'x2_freq', 'x3_freq', 'x4_freq', 'x5_freq', 'x6_freq', 'x7_freq', 'x8_freq',
      'x1_mean', 'x2_mean', 'x3_mean', 'x4_mean', 'x5_mean', 'x6_mean', 'x7_mean', 'x8_mean',
      'x1_std', 'x2_std', 'x3_std', 'x4_std', 'x5_std', 'x6_std', 'x7_std', 'x8_std',
      'key1_freq', 'key2_freq', 'key3_freq', 'key4_freq', 'key5_freq', 'key6_freq', 'key7_freq', 'key8_freq', 'key9_freq',
      'key1_mean', 'key2_mean', 'key3_mean', 'key4_mean', 'key5_mean', 'key6_mean', 'key7_mean', 'key8_mean', 'key9_mean',
      'key1_std', 'key2_std', 'key3_std', 'key4_std', 'key5_std', 'key6_std', 'key7_std', 'key8_std', 'key9_std',
      'udmap_isunknown', 'udmap', 'common_ts', 'uuid', 'target', 'common_ts_hour', 'common_ts_day', 'common_ts_minute',
      'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8',
      'eid', 'eid_std', 'eid_mean', 'eid_freq',
      'key1', 'key2', 'key3', 'key4', 'key5', 'key6', 'key7', 'key8', 'key9'
      ]

# Train the model. There are too many columns to plot clearly in one figure, so
# features are inspected group by group: remove a group from the drop list below
# to keep it in x for that run (as written, only udmap_isunknown remains).
x = train_data.drop(['x1_freq', 'x2_freq', 'x3_freq', 'x4_freq', 'x5_freq', 'x6_freq', 'x7_freq', 'x8_freq',
                     'x1_mean', 'x2_mean', 'x3_mean', 'x4_mean', 'x5_mean', 'x6_mean', 'x7_mean', 'x8_mean',
                     'x1_std', 'x2_std', 'x3_std', 'x4_std', 'x5_std', 'x6_std', 'x7_std', 'x8_std',
                     'key1_freq', 'key2_freq', 'key3_freq', 'key4_freq', 'key5_freq', 'key6_freq', 'key7_freq', 'key8_freq', 'key9_freq',
                     'key1_mean', 'key2_mean', 'key3_mean', 'key4_mean', 'key5_mean', 'key6_mean', 'key7_mean', 'key8_mean', 'key9_mean',
                     'key1_std', 'key2_std', 'key3_std', 'key4_std', 'key5_std', 'key6_std', 'key7_std', 'key8_std', 'key9_std',
                     'udmap', 'common_ts', 'uuid', 'target', 'common_ts_hour', 'common_ts_day', 'common_ts_minute',
                     'x1', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8',
                     'eid', 'eid_std', 'eid_mean', 'eid_freq',
                     'key1', 'key2', 'key3', 'key4', 'key5', 'key6', 'key7', 'key8', 'key9'
                     ], axis=1)
y = train_data['target']
clf = DecisionTreeClassifier()
clf.fit(x, y)

# Get the feature importance scores
feature_importances = clf.feature_importances_

# Create the list of feature names
feature_names = list(x.columns)

# Build a DataFrame of feature names and their importance scores
feature_importances_df = pd.DataFrame({'feature': feature_names, 'importance': feature_importances})

# Sort by importance score
feature_importances_df = feature_importances_df.sort_values('importance', ascending=False)

# Color mapping
colors = plt.cm.viridis(np.linspace(0, 1, len(feature_names)))

# Visualize the feature importances
fig, ax = plt.subplots(figsize=(10, 6))
ax.barh(feature_importances_df['feature'], feature_importances_df['importance'], color=colors)
ax.invert_yaxis()  # flip the y-axis so the most important feature is at the top
ax.set_xlabel('特征重要性', fontsize=12)  # x-axis label ("feature importance")
ax.set_title('决策树特征重要性可视化', fontsize=16)  # title ("decision tree feature importance")
for i, v in enumerate(feature_importances_df['importance']):
    ax.text(v + 0.01, i, str(round(v, 3)), va='center', fontname='Times New Roman', fontsize=10)

# Save the figure
plt.savefig('./特征重要性.jpg', dpi=400, bbox_inches='tight')
plt.show()

        The resulting importance rankings are shown below.

        Because the x{i}_std and key{i}_std features involved too much data, they could not all be plotted in one figure.

        From the charts above, feature importance is concentrated in common_ts_{minute, hour, day}, x{1, 2, 4, 5, 7}, key{1, 2, 3, 6}, and eid, eid_mean, eid_freq. Based on this, the features with the highest importance are extracted from each group; a sketch of doing this selection programmatically follows.
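        A minimal sketch of selecting features programmatically rather than maintaining drop lists by hand, assuming feature_importances_df comes from a run fitted on the full feature set; the cutoff k = 25 is an assumption chosen to match the size of the final list below:

top_k = 25  # assumed cutoff, matching the 25 features kept in the final list
top_features = feature_importances_df['feature'].head(top_k).tolist()
print(top_features)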

Regularization

# Regularization (Lasso)
from sklearn.linear_model import LassoCV
import matplotlib

X = train_data.drop(['udmap', 'common_ts', 'uuid', 'target'], axis=1)
y = train_data['target']
lasso = LassoCV()
lasso.fit(X, y)
coef = pd.Series(lasso.coef_, index=X.columns)
print(coef)
print("Lasso kept " + str(sum(coef != 0)) + " variables and eliminated " + str(sum(coef == 0)) + " variables")

imp_coef = coef.sort_values()
matplotlib.rcParams['figure.figsize'] = (8, 6)
imp_coef.plot(kind="barh")
plt.title("Lasso Model Feature Importance")
plt.show()

        Feature relevance is analyzed and ranked through regularization; the results are as follows:
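        One caveat: Lasso coefficients depend on feature scale, and the frequency features here are on a much larger scale than the mean features, so the ranking can shift if inputs are standardized first. A minimal sketch of that variant (not part of the original run):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Standardize every column before fitting LassoCV so coefficients are comparable
pipe = make_pipeline(StandardScaler(), LassoCV())
pipe.fit(X, y)
scaled_coef = pd.Series(pipe.named_steps['lassocv'].coef_, index=X.columns)
print(scaled_coef.sort_values())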

Feature selection

        Based on the feature-importance and regularization visualizations, the following features are selected:

eid_std, eid_freq, eid_mean, x5, x5_freq, x5_mean, x7, x7_freq, x7_mean, key3, key3_freq, key3_mean, common_ts_minute, common_ts_hour, common_ts_day, udmap_isunknown, key2_freq, key6_freq, key1_freq, x4_freq, x6_freq, x2_freq, x3_std, x4_mean, key9_freq

        Feature importance is then visualized again for these selected features; a sketch of the corresponding re-run follows:
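        A minimal sketch of that re-run, with the selected names typed out as a Python list (the exact call mirrors the earlier cross-validation step and is an assumption):

selected = ['eid_std', 'eid_freq', 'eid_mean', 'x5', 'x5_freq', 'x5_mean',
            'x7', 'x7_freq', 'x7_mean', 'key3', 'key3_freq', 'key3_mean',
            'common_ts_minute', 'common_ts_hour', 'common_ts_day',
            'udmap_isunknown', 'key2_freq', 'key6_freq', 'key1_freq',
            'x4_freq', 'x6_freq', 'x2_freq', 'x3_std', 'x4_mean', 'key9_freq']

pred = cross_val_predict(DecisionTreeClassifier(), train_data[selected], train_data['target'])
print(classification_report(train_data['target'], pred, digits=3))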

Submission score

        The score improved by about 0.05 compared with the result before this round of feature processing.


Series articles (continuously updated)

2023.8 Summer Camp "New User Prediction" Study Notes (1)

2023.8 Summer Camp "New User Prediction" Study Notes (2)

2023.8 Summer Camp "New User Prediction" Study Notes (3)

2023.8 Summer Camp "New User Prediction" Study Notes (4)

2023.8 Summer Camp "New User Prediction" Study Notes (6)
