Encoding categorical features for Kaggle: a summary

This post covers the common recipes for handling categorical features in Kaggle competitions, mainly in the context of tree models (LightGBM, XGBoost, etc.). The focus is on target encoding and beta target encoding.

Summary:

  • label encoding
    • the feature has an intrinsic order (ordinal feature)
  • one-hot encoding
    • the feature has no intrinsic order, number of categories < 4
  • target encoding (mean encoding, likelihood encoding, impact encoding)
    • the feature has no intrinsic order, number of categories > 4
  • beta target encoding
    • the feature has no intrinsic order, number of categories > 4, encoding done inside K-fold cross-validation
  • no preprocessing (the model encodes the feature itself)
    • CatBoost, LightGBM

 

1. Label encoding

For a feature with m categories, label encoding maps each category to an integer between 0 and m-1. Label encoding is suited to ordinal features (features with an intrinsic order).

Code:

# train -> training dataframe
# test -> test dataframe
# cat_cols -> categorical columns

import numpy as np
from sklearn.preprocessing import LabelEncoder

for col in cat_cols:
    le = LabelEncoder()
    # Fit on train and test together so every category seen in either set gets a code
    le.fit(np.concatenate([train[col], test[col]]))
    train[col] = le.transform(train[col])
    test[col] = le.transform(test[col])

2. One-hot encoding (OHE)

For a feature with m categories, one-hot encoding (OHE) turns it into m binary features, one per category. The m binary features are mutually exclusive: exactly one of them is active at a time.

OHE removes the problem that the original feature lacks an intrinsic order, but it has a downside for high-cardinality categorical features (features with many categories): the encoded feature space becomes very large (dimensionality reduction such as PCA can be considered here). In addition, because each one-hot feature is highly unbalanced, every split on it yields only a small gain, so tree models usually have to grow very deep to reach decent accuracy. OHE is therefore generally used when the number of categories is < 4.

Reference: Using Categorical Data with One Hot Encoding

Code:

# train -> training dataframe
# test -> test dataframe
# cat_cols -> categorical columns

import pandas as pd

df = pd.concat([train, test]).reset_index(drop=True)
original_column = list(df.columns)
df = pd.get_dummies(df, columns=cat_cols, dummy_na=True)
new_column = [c for c in df.columns if c not in original_column]
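
As noted above, when a high-cardinality feature makes the one-hot feature space too large, one option is to compress the dummy columns, e.g. with PCA or TruncatedSVD (the latter handles the sparse dummy matrix well). A minimal sketch, reusing df and new_column from the snippet above; the number of components is a hypothetical choice:

from sklearn.decomposition import TruncatedSVD

# Compress the one-hot columns into a smaller dense representation
svd = TruncatedSVD(n_components=10, random_state=0)
ohe_compressed = svd.fit_transform(df[new_column])

# Attach the compressed components back to the dataframe as new features
for i in range(ohe_compressed.shape[1]):
    df[f'ohe_svd_{i}'] = ohe_compressed[:, i]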

3. Target encoding (or likelihood encoding, impact encoding, mean encoding)

Target encoding encodes each category with the target mean value within that category. To reduce target leakage, the mainstream approach is to compute the target mean using 2 levels of cross-validation, along the following lines:

  • Split the train data into 20 folds (for example: infold = folds #2-20, out-of-fold = fold #1)
    • Split each infold set (folds #2-20) again into 10 folds (for example: inner_infold = folds #2-10, inner_oof = fold #1)
      • Compute the 10 inner out-of-fold values (for example: use the target mean over inner_infold #2-10 as the prediction for inner_oof #1)
      • Average the 10 inner out-of-fold values to get inner_oof_mean
    • Compute oof_mean (for example: use the inner_oof_mean from infold #2-20 to predict the oof_mean of out-of-fold #1)
  • Map the oof_mean from the train data onto the test data to finish the encoding

Reference: Likelihood encoding of categorical features

Open-source package category_encoders: scikit-learn-contrib/categorical-encoding

Code:

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# This way we have randomness and are able to reproduce the behaviour within this cell.
np.random.seed(13)

def impact_coding(data, feature, target='y'):
    '''
    In this implementation we get the values and the dictionary as two different steps.
    This is just because initially we were ignoring the dictionary as a result variable.

    In this implementation the KFolds use shuffling. If you want reproducibility the cv
    could be moved to a parameter.
    '''
    n_folds = 20
    n_inner_folds = 10
    impact_coded = pd.Series(dtype=float)

    oof_default_mean = data[target].mean()  # Global mean to use by default (you could further tune this)
    kf = KFold(n_splits=n_folds, shuffle=True)
    oof_mean_cv = pd.DataFrame()
    split = 0
    for infold, oof in kf.split(data[feature]):
        infold_data = data.iloc[infold]

        kf_inner = KFold(n_splits=n_inner_folds, shuffle=True)
        inner_split = 0
        inner_oof_mean_cv = pd.DataFrame()
        oof_default_inner_mean = infold_data[target].mean()
        for infold_inner, oof_inner in kf_inner.split(infold_data):
            # The mean to apply to the inner oof split (a 1/n_folds % based on the rest)
            oof_mean = infold_data.iloc[infold_inner].groupby(by=feature)[target].mean()

            # Also populate mapping (this has all group -> mean for all inner CV folds)
            inner_oof_mean_cv = inner_oof_mean_cv.join(pd.DataFrame(oof_mean), rsuffix=str(inner_split), how='outer')
            inner_oof_mean_cv.fillna(value=oof_default_inner_mean, inplace=True)
            inner_split += 1

        # Also populate mapping
        oof_mean_cv = oof_mean_cv.join(pd.DataFrame(inner_oof_mean_cv), rsuffix=str(split), how='outer')
        oof_mean_cv.fillna(value=oof_default_mean, inplace=True)
        split += 1

        # Encode the out-of-fold rows with the average of the inner-fold category means
        impact_coded = pd.concat([impact_coded, data.iloc[oof].apply(
                        lambda x: inner_oof_mean_cv.loc[x[feature]].mean()
                                  if x[feature] in inner_oof_mean_cv.index
                                  else oof_default_mean,
                        axis=1)])

    return impact_coded, oof_mean_cv.mean(axis=1), oof_default_mean

# Apply the encoding to training and test data, and preserve the mapping
impact_coding_map = {}
for f in categorical_features:
    print("Impact coding for {}".format(f))
    train_data["impact_encoded_{}".format(f)], impact_coding_mapping, default_coding = impact_coding(train_data, f)
    impact_coding_map[f] = (impact_coding_mapping, default_coding)
    mapping, default_mean = impact_coding_map[f]
    test_data["impact_encoded_{}".format(f)] = test_data.apply(lambda x: mapping[x[f]]
                                                                         if x[f] in mapping
                                                                         else default_mean
                                                               , axis=1)
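
Alternatively, the category_encoders package mentioned above ships a ready-made TargetEncoder (it applies additive smoothing toward the global mean rather than the 2-level CV scheme above, so the results are not identical). A minimal sketch, assuming hypothetical train / test dataframes, a cat_cols list, and a target_col name:

import category_encoders as ce

# Fit on the training data only, so the test set never contributes to the encoding
enc = ce.TargetEncoder(cols=cat_cols, smoothing=1.0)
train_encoded = enc.fit_transform(train[cat_cols], train[target_col])  # same column names, values replaced by encodings
test_encoded = enc.transform(test[cat_cols])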

4. Beta target encoding

I first came across this method in the 14th-place solution write-up for the Kaggle Avito Demand Prediction Challenge: 14th Place Solution: The Almost Golden Defenders.

Like target encoding, beta target encoding encodes each category with the target mean value within that category. The difference is that, to further reduce target leakage, beta target encoding happens inside the 5-fold CV rather than before it:

  • Split the train data into 5 folds (5-fold cross-validation)
    • target encoding based on the infold data
    • train the model
    • get the out-of-fold prediction

Beta target encoding also adds a smoothing term, replacing the plain mean with a Bayesian mean (Bayesian average). The idea behind the Bayesian mean: if a category has few samples (< N_min), its mean is noisy, so we pad the category with extra data points to get a smoothing effect; the padded values are set to the prior mean. N_min is a regularization term: the larger N_min, the stronger the regularization. A small worked example follows.
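
A minimal numeric sketch of the Bayesian mean used in the code below (the counts, sums, and prior_mean values are made up for illustration):

import numpy as np

prior_mean = 0.3               # global mean of the target
N_min = 20                     # smoothing term / minimum sample size

counts = np.array([5, 50])     # sample sizes of two example categories
sums = np.array([4.0, 10.0])   # sum of the target within each category (raw means: 0.8 and 0.2)

N_prior = np.maximum(N_min - counts, 0)                              # number of "virtual" prior samples to add
bayesian_mean = (prior_mean * N_prior + sums) / (N_prior + counts)   # -> [0.425, 0.2]
# The small category (n=5) is pulled strongly toward prior_mean; the large category (n=50) is unchanged.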

Reference: Beta Target Encoding

Code:

# train -> training dataframe
# test -> test dataframe
# N_min -> smoothing term (minimum sample size); if a category has fewer than N_min samples, pad it up to N_min
# target_col -> target column
# cat_cols -> categorical columns

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

# Step 1: fill NA in train and test dataframe

# Step 2: 5-fold CV (beta target encoding within each fold)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for i, (dev_index, val_index) in enumerate(kf.split(train.index.values)):
    # split data into dev set and validation set
    dev = train.loc[dev_index].reset_index(drop=True) 
    val = train.loc[val_index].reset_index(drop=True)
        
    feature_cols = []    
    for var_name in cat_cols:
        feature_name = f'{var_name}_mean'
        feature_cols.append(feature_name)
        
        prior_mean = np.mean(dev[target_col])
        stats = dev[[target_col, var_name]].groupby(var_name).agg(['sum', 'count'])[target_col].reset_index()           
   
        ### beta target encoding by Bayesian average for dev set 
        df_stats = pd.merge(dev[[var_name]], stats, how='left')
        df_stats['sum'].fillna(value = prior_mean, inplace = True)
        df_stats['count'].fillna(value = 1.0, inplace = True)
        N_prior = np.maximum(N_min - df_stats['count'].values, 0)   # prior parameters
        dev[feature_name] = (prior_mean * N_prior + df_stats['sum']) / (N_prior + df_stats['count']) # Bayesian mean

        ### beta target encoding by Bayesian average for val set
        df_stats = pd.merge(val[[var_name]], stats, how='left')
        df_stats['sum'].fillna(value = prior_mean, inplace = True)
        df_stats['count'].fillna(value = 1.0, inplace = True)
        N_prior = np.maximum(N_min - df_stats['count'].values, 0)   # prior parameters
        val[feature_name] = (prior_mean * N_prior + df_stats['sum']) / (N_prior + df_stats['count']) # Bayesian mean
        
        ### beta target encoding by Bayesian average for test set
        df_stats = pd.merge(test[[var_name]], stats, how='left')
        df_stats['sum'].fillna(value = prior_mean, inplace = True)
        df_stats['count'].fillna(value = 1.0, inplace = True)
        N_prior = np.maximum(N_min - df_stats['count'].values, 0)   # prior parameters
        test[feature_name] = (prior_mean * N_prior + df_stats['sum']) / (N_prior + df_stats['count']) # Bayesian mean
        
        # Bayesian mean is equivalent to adding N_prior data points of value prior_mean to the data set.
        del df_stats, stats
    # Step 3: train model (K-fold CV), get oof prediction

Finally, for both target encoding and beta target encoding, you are not limited to the target mean (or Bayesian mean); other statistics work as well, including the median, frequency, mode, variance, skewness, and kurtosis -- or any statistic that is correlated with the target. A minimal sketch of swapping in the median is shown below.
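
A minimal sketch, reusing the dev, var_name, and target_col variables from the loop above, that encodes with the in-fold median instead of the mean (no smoothing applied here):

col = f'{var_name}_median'
stats_median = dev.groupby(var_name)[target_col].median().rename(col).reset_index()
dev = dev.merge(stats_median, on=var_name, how='left')
dev[col] = dev[col].fillna(dev[target_col].median())   # fall back to the global median for unseen categories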

 

5. No preprocessing (the model encodes the feature itself)

  • XGBoost and Random Forest cannot handle categorical features directly; they must first be encoded into numerical features.
  • LightGBM and CatBoost can handle categorical features directly (a minimal sketch follows).
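
A minimal sketch of letting the model do the encoding, assuming hypothetical X / y data and a cat_cols list (parameter values are illustrative only):

import lightgbm as lgb
from catboost import CatBoostClassifier

# CatBoost: pass the raw string columns and tell it which ones are categorical
cb_model = CatBoostClassifier(verbose=0)
cb_model.fit(X, y, cat_features=cat_cols)

# LightGBM: cast the categorical columns to pandas 'category' dtype and name them explicitly
for c in cat_cols:
    X[c] = X[c].astype('category')
lgb_model = lgb.LGBMClassifier()
lgb_model.fit(X, y, categorical_feature=cat_cols)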