New User Prediction Challenge — Check-in Log 5

This post walks through preprocessing the new-user prediction data with feature transformations (one-hot encoding and combined features), trains models with LightGBM and DecisionTreeClassifier, finds signs of overfitting, and plans to try other models next.


Continuing feature tuning and model optimization

Implementation code

import pandas as pd
import numpy as np

train_data = pd.read_csv('用户新增预测挑战赛公开数据/train.csv')
test_data = pd.read_csv('用户新增预测挑战赛公开数据/test.csv')

train_data['common_ts'] = pd.to_datetime(train_data['common_ts'], unit='ms')
test_data['common_ts'] = pd.to_datetime(test_data['common_ts'], unit='ms')
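common_ts is a Unix epoch in milliseconds, hence unit='ms'. A tiny check of the conversion on made-up timestamps:

```python
import pandas as pd

# 0 ms is the epoch itself; 86,400,000 ms is exactly one day later
ts = pd.Series([0, 86400000])
dt = pd.to_datetime(ts, unit='ms')
print(dt.iloc[1])  # 1970-01-02 00:00:00
```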

def udmap_onethot(d):
    # Expand the udmap field (a dict-literal string such as "{'key1': 3}")
    # into a fixed vector of 9 slots, one per key1..key9
    v = np.zeros(9)
    if d == 'unknown':
        return v

    d = eval(d)
    for i in range(1, 10):
        if 'key' + str(i) in d:
            v[i-1] = d['key' + str(i)]

    return v
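A quick sanity check of the parser (the function is repeated here so the snippet runs standalone; the sample strings are hypothetical udmap values, not taken from the dataset):

```python
import numpy as np

def udmap_onethot(d):
    # Same parser as above: expand a dict-literal string into 9 key slots
    v = np.zeros(9)
    if d == 'unknown':
        return v
    d = eval(d)
    for i in range(1, 10):
        if 'key' + str(i) in d:
            v[i - 1] = d['key' + str(i)]
    return v

print(udmap_onethot("{'key1': 3, 'key5': 7}"))  # [3. 0. 0. 0. 7. 0. 0. 0. 0.]
print(udmap_onethot('unknown'))                 # all zeros
```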

train_udmap_df = pd.DataFrame(np.vstack(train_data['udmap'].apply(udmap_onethot)))
test_udmap_df = pd.DataFrame(np.vstack(test_data['udmap'].apply(udmap_onethot)))

train_udmap_df.columns = ['key' + str(i) for i in range(1, 10)]
test_udmap_df.columns = ['key' + str(i) for i in range(1, 10)]

train_data = pd.concat([train_data, train_udmap_df], axis=1)
test_data = pd.concat([test_data, test_udmap_df], axis=1)

train_data['eid_freq'] = train_data['eid'].map(train_data['eid'].value_counts())
test_data['eid_freq'] = test_data['eid'].map(train_data['eid'].value_counts())
train_data['eid_mean'] = train_data['eid'].map(train_data.groupby('eid')['target'].mean())
test_data['eid_mean'] = test_data['eid'].map(train_data.groupby('eid')['target'].mean())
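One caveat: mapping a target mean computed on the full training set back onto the training rows leaks each row's own label into its feature, which can inflate offline scores. An out-of-fold version avoids this; a minimal sketch on a toy frame (the helper name `oof_target_mean` is my own, not from the original code):

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def oof_target_mean(train_df, col, target, n_splits=5, seed=42):
    # Out-of-fold target encoding: each row's encoding is computed
    # from the other folds only, so its own label never leaks in.
    enc = pd.Series(np.nan, index=train_df.index, dtype=float)
    global_mean = train_df[target].mean()
    for tr_idx, val_idx in KFold(n_splits, shuffle=True, random_state=seed).split(train_df):
        fold_means = train_df.iloc[tr_idx].groupby(col)[target].mean()
        enc.iloc[val_idx] = train_df[col].iloc[val_idx].map(fold_means).values
    return enc.fillna(global_mean)  # unseen categories fall back to the global mean

# toy usage
df = pd.DataFrame({'eid': [1, 1, 2, 2, 1, 2], 'target': [1, 0, 1, 1, 1, 0]})
df['eid_mean_oof'] = oof_target_mean(df, 'eid', 'target')
```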

train_data['udmap_isunknown'] = (train_data['udmap'] == 'unknown').astype(int)
test_data['udmap_isunknown'] = (test_data['udmap'] == 'unknown').astype(int)

train_data['common_ts_hour'] = train_data['common_ts'].dt.hour
test_data['common_ts_hour'] = test_data['common_ts'].dt.hour

train_data['common_ts_minute'] = train_data['common_ts'].dt.minute
test_data['common_ts_minute'] = test_data['common_ts'].dt.minute

# Extract the day of the month
train_data['common_ts_day'] = train_data['common_ts'].dt.day
test_data['common_ts_day'] = test_data['common_ts'].dt.day

# Frequency (count) of feature x1 in the training set, and the mean of the target column per x1 value (computed on train)
train_data['x1_freq'] = train_data['x1'].map(train_data['x1'].value_counts())
test_data['x1_freq'] = test_data['x1'].map(train_data['x1'].value_counts())
train_data['x1_mean'] = train_data['x1'].map(train_data.groupby('x1')['target'].mean())
test_data['x1_mean'] = test_data['x1'].map(train_data.groupby('x1')['target'].mean())

train_data['key7+key8+key9'] = train_data['key7']+train_data['key8']+train_data['key9']
test_data['key7+key8+key9'] = test_data['key7']+test_data['key8']+test_data['key9']
train_data['key1~key6'] = train_data['key1']+train_data['key2']+train_data['key3']+train_data['key4']+train_data['key5']+train_data['key6']
test_data['key1~key6'] = test_data['key1']+test_data['key2']+test_data['key3']+test_data['key4']+test_data['key5']+test_data['key6']

# New binary feature: whether the hour falls between 8:00 and 15:00 (inclusive)
train_data['is_in_working_hours'] = train_data['common_ts_hour'].between(8, 15).astype(int)
test_data['is_in_working_hours'] = test_data['common_ts_hour'].between(8, 15).astype(int)

# New binary feature: whether x7 is at most 5
train_data['is_x7_less_than_5'] = (train_data['x7']<=5).astype(int)
test_data['is_x7_less_than_5'] = (test_data['x7']<=5).astype(int)

train_data['is_common_ts_day_more_10'] = (train_data['common_ts_day']>=10).astype(int)
test_data['is_common_ts_day_more_10'] = (test_data['common_ts_day']>=10).astype(int)

train_data['is_common_ts_day_is_10'] = (train_data['common_ts_day']==10).astype(int)
test_data['is_common_ts_day_is_10'] = (test_data['common_ts_day']==10).astype(int)
train_data['is_common_ts_day_is_14'] = (train_data['common_ts_day']==14).astype(int)
test_data['is_common_ts_day_is_14'] = (test_data['common_ts_day']==14).astype(int)
train_data['is_common_ts_day_is_17'] = (train_data['common_ts_day']==17).astype(int)
test_data['is_common_ts_day_is_17'] = (test_data['common_ts_day']==17).astype(int)
# Mean of each udmap key per eid, computed on the training set and mapped onto both sets
for i in range(1, 10):
    key_mean = train_data.groupby('eid')[f'key{i}'].mean()
    train_data[f'eid_key{i}_mean'] = train_data['eid'].map(key_mean)
    test_data[f'eid_key{i}_mean'] = test_data['eid'].map(key_mean)

# One-hot encode pairwise combinations of the selected x features
from itertools import combinations

x_columns = ['x1', 'x2', 'x6', 'x7', 'x8']
combinations_list = list(combinations(x_columns, 2))

# Encode each pair on train and test together, aligning the test columns with
# train so that categories seen in only one split cannot desynchronize the matrices
for feature1, feature2 in combinations_list:
    combined_feature_name = f'{feature1}_{feature2}'  # prefix for the new columns
    train_dummies = pd.get_dummies(train_data[[feature1, feature2]], columns=[feature1, feature2], prefix=[feature1, feature2])
    test_dummies = pd.get_dummies(test_data[[feature1, feature2]], columns=[feature1, feature2], prefix=[feature1, feature2])
    test_dummies = test_dummies.reindex(columns=train_dummies.columns, fill_value=0)
    train_dummies.columns = [f'{combined_feature_name}_{col}' for col in train_dummies.columns]
    test_dummies.columns = [f'{combined_feature_name}_{col}' for col in test_dummies.columns]
    train_data = pd.concat([train_data, train_dummies], axis=1)
    test_data = pd.concat([test_data, test_dummies], axis=1)
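When train and test are dummy-encoded separately, a category present in only one split produces mismatched columns. Reindexing the test dummies against the train columns keeps the two matrices aligned; a toy illustration:

```python
import pandas as pd

train = pd.DataFrame({'x1': [0, 1, 2]})
test = pd.DataFrame({'x1': [0, 3]})  # category 3 never appears in train

train_d = pd.get_dummies(train, columns=['x1'], prefix=['x1'])
test_d = pd.get_dummies(test, columns=['x1'], prefix=['x1'])
# Drop unseen-category columns and zero-fill the missing ones
test_d = test_d.reindex(columns=train_d.columns, fill_value=0)
print(list(test_d.columns))  # ['x1_0', 'x1_1', 'x1_2'] — same as train
```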

import lightgbm as lgb
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import classification_report
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import f1_score

# Train and validate a DecisionTreeClassifier
X = train_data.drop(['udmap', 'common_ts', 'uuid', 'target', 'key1', 'key2', 'key3', 'key4', 'key5', 'key6', 'key7', 'key8', 'key9'], axis=1)
y = train_data['target']

clf = DecisionTreeClassifier()

pred = cross_val_predict(clf, X, y)

report = classification_report(y, pred, digits=3)

print(report)

(Screenshot: cross-validated classification report output)

Summary

Offline validation reaches 0.8+, but the online score is only about 0.75+, so the model is probably overfitting somewhat.
I also have not tried swapping in other models yet; that is the next thing to try.
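As one quick alternative to the plain decision tree, a random forest can be dropped into the same cross_val_predict flow; a minimal sketch on synthetic stand-in data (in the real pipeline, X and y come from the feature frame built above, and the hyperparameters here are just a starting guess):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

# Synthetic stand-in data with a simple signal in the first column
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Capped depth as a first guard against the overfitting seen with the single tree
clf = RandomForestClassifier(n_estimators=200, max_depth=6, random_state=42)
pred = cross_val_predict(clf, X, y, cv=5)
print(f1_score(y, pred))
```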
