kaggle(04)---avazu_ctr_predictor(baseline)

Competition goal:

  • Predict the click-through rate of items on certain web pages by analyzing online system logs and user-behaviour information.
  • It is a binary classification problem: we only need to predict whether a user clicks or not.
  • Ideally the model should output a probability, e.g. the probability that a user clicks a given ad.
    Competition homepage

File information:

train - Training set. 10 days of click-through data, ordered chronologically. Non-clicks and clicks are subsampled according to different strategies.

test - Test set. 1 day of ads for testing your model predictions.

sampleSubmission.csv - Sample submission file in the correct format, corresponds to the All-0.5 Benchmark.

Attribute information:

  • id: ad identifier
  • click: 0/1 for non-click/click
  • hour: format is YYMMDDHH, so 14091123 means 23:00 on Sept. 11, 2014 UTC.
  • C1 – anonymized categorical variable
  • banner_pos
  • site_id
  • site_domain
  • site_category
  • app_id
  • app_domain
  • app_category
  • device_id
  • device_ip
  • device_model
  • device_type
  • device_conn_type
  • C14-C21 – anonymized categorical variables

Initial analysis:

  • This is a click-through-rate prediction problem, i.e. a binary classification problem.
  • A first look at the given attributes shows they fall into four groups: user, site, ad and time.
  • Time should be an important attribute and is worth careful analysis, since people look at different things at different times (see the sketch after this list).
  • Site category is another attribute strongly correlated with the user.
  • Device type can reflect a user's income bracket and spending level.
  • And so on: there are surely many more correlations hidden in these attributes, and we should put ourselves in the user's shoes when thinking about them.
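Since hour encodes the timestamp as YYMMDDHH, here is a minimal sketch of how it could be split into coarser time features; the helper name and the derived column names are my own and are not part of the original notebook:

import pandas as pd

def add_time_features(df):
    # Parse the YYMMDDHH integer into hour-of-day and day-of-week columns.
    ts = pd.to_datetime(df['hour'].astype(str), format='%y%m%d%H')
    df['hour_of_day'] = ts.dt.hour         # 0-23
    df['day_of_week'] = ts.dt.dayofweek    # 0 = Monday
    return df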

Load Data

import pandas as pd


# Initial setup
train_filename = "train_small.csv"  # the full training set is large, so start from a downsampled sample (see the sketch below)
test_filename = "test.csv"
submission_filename = "submit.csv"

training_set = pd.read_csv(train_filename)
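How train_small.csv was produced is not shown. One way to build such a sample from the full train.csv without loading it all into memory is to read the file in chunks and sample each chunk; a rough sketch, with the chunk size and sampling fraction chosen arbitrarily for illustration:

import pandas as pd

# Stream the full training file and keep roughly 1% of each chunk.
chunks = []
for chunk in pd.read_csv("train.csv", chunksize=10 ** 6):
    chunks.append(chunk.sample(frac=0.01, random_state=0))
pd.concat(chunks, ignore_index=True).to_csv("train_small.csv", index=False)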

Explore Data

training_set.shape
(99999, 24)
# first, take a look at what the data looks like
training_set.head(10)
[head(10) output: the first 10 rows of the training set, showing id, click, hour, C1, banner_pos, site_id, site_domain, site_category, app_id, app_domain, ..., device_type, device_conn_type, C14-C21]

10 rows × 24 columns

  • Excluding id and click, there are 22 feature attributes, many of which are categorical.
  • The training set has 99,999 samples in total, a manageable size: neither too many nor too few.
training_set.info()
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 99999 entries, 0 to 99998
Data columns (total 24 columns):
id                  99999 non-null float64
click               99999 non-null int64
hour                99999 non-null int64
C1                  99999 non-null int64
banner_pos          99999 non-null int64
site_id             99999 non-null object
site_domain         99999 non-null object
site_category       99999 non-null object
app_id              99999 non-null object
app_domain          99999 non-null object
app_category        99999 non-null object
device_id           99999 non-null object
device_ip           99999 non-null object
device_model        99999 non-null object
device_type         99999 non-null int64
device_conn_type    99999 non-null int64
C14                 99999 non-null int64
C15                 99999 non-null int64
C16                 99999 non-null int64
C17                 99999 non-null int64
C18                 99999 non-null int64
C19                 99999 non-null int64
C20                 99999 non-null int64
C21                 99999 non-null int64
dtypes: float64(1), int64(14), object(9)
memory usage: 18.3+ MB
  • The data has already been preprocessed, so it is fairly complete with no missing values, which saves us a lot of work.
  • Many of the attributes are categorical and need to be encoded (see the sketch after this list).
  • The numeric columns are all int64, but we still need to check whether their value ranges are comparable; otherwise they will need to be normalized.
  • Next, look at the distribution of the numeric columns.
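As a minimal sketch of the encoding step, assuming simple label encoding of the string-valued columns (a hashed or one-hot encoding would usually be preferable for high-cardinality IDs; this step is not part of the original notebook):

from sklearn.preprocessing import LabelEncoder

# Label-encode every object-typed (string) column in place.
object_cols = training_set.select_dtypes(include=['object']).columns
for col in object_cols:
    training_set[col] = LabelEncoder().fit_transform(training_set[col])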
# summary statistics of the numeric columns (transposed so the wide table is readable)
training_set.describe().T
                   count          mean           std           min           25%           50%           75%           max
id                 99999  9.500834e+18  5.669435e+18  3.237563e+13  4.183306e+18  1.074496e+19  1.457544e+19  1.844670e+19
click              99999      0.174902      0.379885      0.000000      0.000000      0.000000      0.000000      1.000000
hour               99999    14102100.0      0.000000    14102100.0    14102100.0    14102100.0    14102100.0    14102100.0
C1                 99999   1005.034440      1.088705   1001.000000   1005.000000   1005.000000   1005.000000   1010.000000
banner_pos         99999      0.198302      0.402641      0.000000      0.000000      0.000000      0.000000      5.000000
device_type        99999      1.055741      0.583986      0.000000      1.000000      1.000000      1.000000      5.000000
device_conn_type   99999      0.199272      0.635271      0.000000      0.000000      0.000000      0.000000      5.000000
C14                99999  17682.106071   3237.726956    375.000000  15704.000000  17654.000000  20362.000000  21705.000000
C15                99999    318.333943     11.931998    120.000000    320.000000    320.000000    320.000000    728.000000
C16                99999     56.818988     36.924283     20.000000     50.000000     50.000000     50.000000    480.000000
C17                99999   1964.029090    394.961129    112.000000   1722.000000   1993.000000   2306.000000   2497.000000
C18                99999      0.789328      1.223747      0.000000      0.000000      0.000000      2.000000      3.000000
C19                99999    131.735447    244.077816     33.000000     35.000000     35.000000     39.000000   1835.000000
C20                99999  37874.606366  48546.369299     -1.000000     -1.000000     -1.000000 100083.000000 100248.000000
C21                99999     88.555386     45.482979     13.000000     61.000000     79.000000    156.000000    157.000000
  • The numeric columns have widely differing value ranges, so normalization will be needed later (a minimal sketch follows).
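A minimal normalization sketch, assuming min-max scaling of the numeric feature columns; the baseline below does not actually apply this step, and the column list is my own choice:

from sklearn.preprocessing import MinMaxScaler

# Rescale the numeric feature columns to the [0, 1] range.
numeric_cols = ['C1', 'banner_pos', 'device_type', 'device_conn_type',
                'C14', 'C15', 'C16', 'C17', 'C18', 'C19', 'C20', 'C21']
training_set[numeric_cols] = MinMaxScaler().fit_transform(training_set[numeric_cols])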
# id: ad identifier
# click: 0/1 for non-click/click
# hour: format is YYMMDDHH, so 14091123 means 23:00 on Sept. 11, 2014 UTC.
# C1 -- anonymized categorical variable
# banner_pos
# site_id
# site_domain
# site_category
# app_id
# app_domain
# app_category
# device_id
# device_ip
# device_model
# device_type
# device_conn_type
# C14-C21 -- anonymized categorical variables
from sklearn.externals import joblib
from sklearn.cross_validation import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn import metrics

from utils import load_df
E:\Anaconda2\soft\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)
# Evaluation metrics
def print_metrics(true_values, predicted_values):
    print "Accuracy: ", metrics.accuracy_score(true_values, predicted_values)
    print "AUC: ", metrics.roc_auc_score(true_values, predicted_values)
    print "Confusion Matrix: ", metrics.confusion_matrix(true_values, predicted_values)
    print metrics.classification_report(true_values, predicted_values)

# Fit a classifier of the given class on the training data
def classify(classifier_class, train_input, train_targets):
    classifier_object = classifier_class()
    classifier_object.fit(train_input, train_targets)
    return classifier_object

# Persist the trained model to disk
def save_model(clf):
    joblib.dump(clf, 'classifier.pkl')
train_data = load_df('train_small.csv').values
train_data.shape  # still 99999 samples
(99999L, 14L)
train_data[:,:]
array([[       0, 14102100,     1005, ...,       35,       -1,       79],
       [       0, 14102100,     1005, ...,       35,   100084,       79],
       [       0, 14102100,     1005, ...,       35,   100084,       79],
       ...,
       [       0, 14102100,     1005, ...,       35,       -1,       79],
       [       1, 14102100,     1005, ...,       35,       -1,       79],
       [       0, 14102100,     1005, ...,       35,       -1,       79]],
      dtype=int64)
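The load_df helper lives in a local utils module that is not shown. Judging by the resulting 14 columns (24 original columns minus id and the nine string-valued ones), it presumably keeps only the numeric columns; the following is a hypothetical reconstruction, not the author's actual code:

import pandas as pd

def load_df(filename, training=True):
    # Keep only numeric columns: drop the string-valued site/app/device
    # identifiers, and drop `id` for the training set (it stays in the
    # test set so predictions can be matched back to rows).
    df = pd.read_csv(filename)
    df = df.drop(df.select_dtypes(include=['object']).columns, axis=1)
    if training:
        df = df.drop('id', axis=1)
    return df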

Let's first train a baseline and see. When it comes to baselines, the natural choice is the industry-standard model, logistic regression (LR).

# Train, evaluate and persist the model
X_train, X_test, y_train, y_test = train_test_split(train_data[0::, 1::], train_data[0::, 0],
                                                    test_size=0.3, random_state=0)

classifier = classify(LogisticRegression, X_train, y_train)  # use the LR model
predictions = classifier.predict(X_test)
print_metrics(y_test, predictions)  # evaluate the classifier with several metrics
save_model(classifier)  # save the model
Accuracy:  0.8233
AUC:  0.5
Confusion Matrix:  [[24699     0]
 [ 5301     0]]
             precision    recall  f1-score   support

          0       0.82      1.00      0.90     24699
          1       0.00      0.00      0.00      5301

avg / total       0.68      0.82      0.74     30000



E:\Anaconda2\soft\lib\site-packages\sklearn\metrics\classification.py:1135: UndefinedMetricWarning: Precision and F-score are ill-defined and being set to 0.0 in labels with no predicted samples.
  'precision', 'predicted', average, warn_for)

From the baseline results we can draw the following conclusions:

  • Simply predicting every sample as non-click already reaches 82.33% accuracy, which is clearly misleading.
  • The confusion matrix shows that every actual click was predicted as non-click. A likely cause is class imbalance: ad clicks are rare, so most samples are labelled non-click, and the model is biased towards predicting non-click (a mitigation sketch follows the next cell).
  • The experiment also shows that accuracy alone can be a very poor indicator of how well a model is really doing.
# Non-click samples account for about 82.5% of all samples, which matches our analysis above: the classes are highly imbalanced.
training_set[training_set["click"] == 0].count()[0] * 1.0  / training_set.shape[0]
0.8250982509825098
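One common mitigation, sketched below rather than applied in the original baseline, is to reweight the classes inside logistic regression and to judge the model on probability-based metrics instead of hard labels:

# Reweight classes and evaluate with probability-based metrics
# (illustrative variation on the baseline above).
weighted_clf = LogisticRegression(class_weight='balanced')
weighted_clf.fit(X_train, y_train)

click_proba = weighted_clf.predict_proba(X_test)[:, 1]   # P(click)
print "AUC: ", metrics.roc_auc_score(y_test, click_proba)
print "Log loss: ", metrics.log_loss(y_test, click_proba)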
import numpy as np
from pandas import DataFrame

# Write the predictions out in the required submission format
def create_submission(ids, predictions, filename='submission.csv'):
    submissions = np.concatenate((ids.reshape(len(ids), 1), predictions.reshape(len(predictions), 1)), axis=1)
    df = DataFrame(submissions)
    df.to_csv(filename, header=['id', 'click'], index=False)

classifier = joblib.load('classifier.pkl')
test_data_df = load_df('test.csv', training=False)
ids = test_data_df.values[0:, 0]
predictions = classifier.predict(test_data_df.values[0:, 1:])
create_submission(ids, predictions)
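Note that the competition is scored on predicted probabilities (log loss), while classifier.predict returns hard 0/1 labels. A hedged variant that submits P(click) instead; the output filename here is my own choice:

# Submit click probabilities rather than 0/1 labels.
proba_predictions = classifier.predict_proba(test_data_df.values[0:, 1:])[:, 1]
create_submission(ids, proba_predictions, filename='submit_proba.csv')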
