Using LightGBM

Contents

Loading Data

Parameter Settings

Training the Model

Cross Validation

Early Stopping

Prediction

GOSS

EFB


LightGBM is an open-source gradient boosting framework from Microsoft that uses tree-based learning algorithms.

Documentation: official documentation

Source code: GitHub

Chinese documentation: 中文文档

Paper: lightgbm-a-highly-efficient-gradient-boosting-decision-tree

Reference blogs: lightgbm,xgboost,gbdt的区别与联系 - Mata - 博客园
                 LightGBM原理之论文详解 - u010242233的博客 - CSDN博客

Loading Data

LightGBM can load data in the following formats:

  • libsvm/tsv/csv/txt format file
  • NumPy 2D array(s), pandas DataFrame, H2O DataTable’s Frame, SciPy sparse matrix
  • LightGBM binary file

Load a libsvm text file or a LightGBM binary file:

import lightgbm as lgb

train_data = lgb.Dataset('train.svm.bin')

Load a NumPy array:

import numpy as np

data = np.random.rand(500, 10)  # 500 entities, each contains 10 features
label = np.random.randint(2, size=500)  # binary target
train_data = lgb.Dataset(data, label=label)

Load a scipy.sparse.csr_matrix array:

import scipy.sparse

# dat, row and col are the value/row-index/column-index arrays of the sparse matrix
csr = scipy.sparse.csr_matrix((dat, (row, col)))
train_data = lgb.Dataset(csr)

Saving a Dataset into a LightGBM binary file makes subsequent loading faster:

train_data = lgb.Dataset('train.svm.txt')
train_data.save_binary('train.bin')

Create validation data:

validation_data = train_data.create_valid('validation.svm')

When constructing a Dataset, categorical features must first be encoded as integers. free_raw_data=True (the default) lets LightGBM free its reference to the raw data after the internal Dataset is constructed, saving memory.
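A minimal sketch of both points, assuming a small pandas DataFrame with a hypothetical string column:

import lightgbm as lgb
import pandas as pd

df = pd.DataFrame({'color': ['red', 'green', 'red', 'blue'],
                   'size': [1.0, 2.5, 0.5, 3.0],
                   'y': [0, 1, 0, 1]})
# categorical features must be integer-encoded before building the Dataset
df['color'] = df['color'].astype('category').cat.codes

train_data = lgb.Dataset(df[['color', 'size']], label=df['y'],
                         categorical_feature=['color'],
                         free_raw_data=True)  # default; raw data is freed after binning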

Parameter Settings

Per-instance weights are often attached when constructing the Dataset:

w = np.random.rand(500, )
train_data = lgb.Dataset(data, label=label, weight=w)

Booster parameters:

param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
param['metric'] = ['auc', 'binary_logloss']  # one metric, or several passed as a list

Core Parameters

  • config, default = "", type = string, aliases: config_file
    • path of config file
    • Note: can be used only in CLI version
  • task, default = train, type = enum, options: train, predict, convert_model, refit, aliases: task_type
    • train, for training, aliases: training
    • predict, for prediction, aliases: prediction, test
    • convert_model, for converting model file into if-else format, see more information in IO Parameters
    • refit, for refitting existing models with new data, aliases: refit_tree
    • Note: can be used only in CLI version; for language-specific packages you can use the correspondent functions
  • objective, default = regression, type = enum, options: regression, regression_l1, huber, fair, poisson, quantile, mape, gamma, tweedie, binary, multiclass, multiclassova, xentropy, xentlambda, lambdarank, aliases: objective_type, app, application
    • regression application
      • regression_l2, L2 loss, aliases: regression, mean_squared_error, mse, l2_root, root_mean_squared_error, rmse
      • regression_l1, L1 loss, aliases: mean_absolute_error, mae
      • huber, Huber loss
      • fair, Fair loss
      • poisson, Poisson regression
      • quantile, Quantile regression
      • mape, MAPE loss, aliases: mean_absolute_percentage_error
      • gamma, Gamma regression with log-link. It might be useful, e.g., for modeling insurance claims severity, or for any target that might be gamma-distributed
      • tweedie, Tweedie regression with log-link. It might be useful, e.g., for modeling total loss in insurance, or for any target that might be tweedie-distributed
    • binary, binary log loss classification (or logistic regression). Requires labels in {0, 1}; see cross-entropy application for general probability labels in [0, 1]
    • multi-class classification application
      • multiclass, softmax objective function, aliases: softmax
      • multiclassova, One-vs-All binary objective function, aliases: multiclass_ova, ova, ovr
      • num_class should be set as well
    • cross-entropy application
      • xentropy, objective function for cross-entropy (with optional linear weights), aliases: cross_entropy
      • xentlambda, alternative parameterization of cross-entropy, aliases: cross_entropy_lambda
      • label is anything in interval [0, 1]
    • lambdarank, lambdarank application
      • label should be int type in lambdarank tasks, and larger numbers represent higher relevance (e.g. 0:bad, 1:fair, 2:good, 3:perfect)
      • label_gain can be used to set the gain (weight) of int label
      • all values in label must be smaller than the number of elements in label_gain
  • boosting, default = gbdt, type = enum, options: gbdt, gbrt, rf, random_forest, dart, goss, aliases: boosting_type, boost
  • data, default = "", type = string, aliases: train, train_data, train_data_file, data_filename
    • path of training data, LightGBM will train from this data
    • Note: can be used only in CLI version
  • valid, default = "", type = string, aliases: test, valid_data, valid_data_file, test_data, test_data_file, valid_filenames
    • path(s) of validation/test data, LightGBM will output metrics for these data
    • supports multiple validation data, separated by ,
    • Note: can be used only in CLI version
  • num_iterations, default = 100, type = int, aliases: num_iteration, n_iter, num_tree, num_trees, num_round, num_rounds, num_boost_round, n_estimators, constraints: num_iterations >= 0
    • number of boosting iterations
    • Note: internally, LightGBM constructs num_class * num_iterations trees for multi-class classification problems
  • learning_rate, default = 0.1, type = double, aliases: shrinkage_rate, eta, constraints: learning_rate > 0.0
    • shrinkage rate
    • in dart, it also affects the normalization weights of dropped trees
  • num_leaves, default = 31, type = int, aliases: num_leaf, max_leaves, max_leaf, constraints: num_leaves > 1
    • max number of leaves in one tree
  • tree_learner, default = serial, type = enum, options: serial, feature, data, voting, aliases: tree, tree_type, tree_learner_type
    • serial, single machine tree learner
    • feature, feature parallel tree learner, aliases: feature_parallel
    • data, data parallel tree learner, aliases: data_parallel
    • voting, voting parallel tree learner, aliases: voting_parallel
    • refer to Parallel Learning Guide to get more details
  • num_threads, default = 0, type = int, aliases: num_thread, nthread, nthreads, n_jobs
    • number of threads for LightGBM
    • 0 means default number of threads in OpenMP
    • for the best speed, set this to the number of real CPU cores, not the number of threads (most CPUs use hyper-threading to generate 2 threads per CPU core)
    • do not set it too large if your dataset is small (for instance, do not use 64 threads for a dataset with 10,000 rows)
    • be aware a task manager or any similar CPU monitoring tool might report that cores are not being fully utilized. This is normal
    • for parallel learning, do not use all CPU cores because this will cause poor performance for the network communication
  • device_type, default = cpu, type = enum, options: cpu, gpu, aliases: device
    • device for the tree learning; you can use a GPU to achieve faster learning
    • Note: it is recommended to use a smaller max_bin (e.g. 63) to get a better speed-up
    • Note: for faster speed, GPU uses 32-bit floating point to sum up by default, so this may affect the accuracy for some tasks. You can set gpu_use_dp=true to enable 64-bit floating point, but it will slow down training
    • Note: refer to Installation Guide to build LightGBM with GPU support
  • seed, default = None, type = int, aliases: random_seed, random_state
    • this seed is used to generate other seeds, e.g. data_random_seed, feature_fraction_seed, etc.
    • by default, this seed is unused in favor of default values of other seeds
    • this seed has lower priority in comparison with other seeds, which means that it will be overridden, if you set other seeds explicitly
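To make the mapping onto the Python API concrete, here is a minimal sketch that sets a few of these core parameters (synthetic data, arbitrary values):

import lightgbm as lgb
import numpy as np

X = np.random.rand(1000, 10)
y = np.random.randint(2, size=1000)
train_data = lgb.Dataset(X, label=y)

params = {
    'objective': 'binary',   # learning task
    'boosting': 'gbdt',      # gbdt / rf / dart / goss
    'num_leaves': 31,        # max leaves per tree
    'learning_rate': 0.1,    # shrinkage rate
    'num_threads': 4,
    'seed': 42,              # master seed used to derive the other seeds
}
bst = lgb.train(params, train_data, num_boost_round=100)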

Learning Control Parameters

  • max_depth, default = -1, type = int
    • limit the max depth for tree model. This is used to deal with over-fitting when #data is small. The tree still grows leaf-wise
    • < 0 means no limit
  • min_data_in_leaf, default = 20, type = int, aliases: min_data_per_leaf, min_data, min_child_samples, constraints: min_data_in_leaf >= 0
    • minimal number of data in one leaf. Can be used to deal with over-fitting
  • min_sum_hessian_in_leaf, default = 1e-3, type = double, aliases: min_sum_hessian_per_leaf, min_sum_hessian, min_hessian, min_child_weight, constraints: min_sum_hessian_in_leaf >= 0.0
    • minimal sum hessian in one leaf. Like min_data_in_leaf, it can be used to deal with over-fitting
  • bagging_fraction, default = 1.0, type = double, aliases: sub_row, subsample, bagging, constraints: 0.0 < bagging_fraction <= 1.0
    • like feature_fraction, but this will randomly select part of the data without resampling
    • can be used to speed up training
    • can be used to deal with over-fitting
    • Note: to enable bagging, bagging_freq should be set to a non-zero value as well
  • bagging_freq, default = 0, type = int, aliases: subsample_freq
    • frequency for bagging
    • 0 means disable bagging; k means perform bagging at every k iterations
    • Note: to enable bagging, bagging_fraction should be set to a value smaller than 1.0 as well
  • bagging_seed, default = 3, type = int, aliases: bagging_fraction_seed
    • random seed for bagging
  • feature_fraction, default = 1.0, type = double, aliases: sub_feature, colsample_bytree, constraints: 0.0 < feature_fraction <= 1.0
    • LightGBM will randomly select part of the features on each iteration if feature_fraction is smaller than 1.0. For example, if you set it to 0.8, LightGBM will select 80% of the features before training each tree
    • can be used to speed up training
    • can be used to deal with over-fitting
  • feature_fraction_seed, default = 2, type = int
    • random seed for feature_fraction
  • early_stopping_round, default = 0, type = int, aliases: early_stopping_rounds, early_stopping
    • will stop training if one metric of one validation data doesn't improve in the last early_stopping_round rounds
    • <= 0 means disable
  • max_delta_step, default = 0.0, type = double, aliases: max_tree_output, max_leaf_output
    • used to limit the max output of tree leaves
    • <= 0 means no constraint
    • the final max output of leaves is learning_rate * max_delta_step
  • lambda_l1, default = 0.0, type = double, aliases: reg_alpha, constraints: lambda_l1 >= 0.0
    • L1 regularization
  • lambda_l2, default = 0.0, type = double, aliases: reg_lambda, lambda, constraints: lambda_l2 >= 0.0
    • L2 regularization
  • min_gain_to_split, default = 0.0, type = double, aliases: min_split_gain, constraints: min_gain_to_split >= 0.0
    • the minimal gain to perform a split
  • drop_rate, default = 0.1, type = double, aliases: rate_drop, constraints: 0.0 <= drop_rate <= 1.0
    • used only in dart
    • dropout rate: a fraction of previous trees to drop during the dropout
  • max_drop, default = 50, type = int
    • used only in dart
    • max number of dropped trees during one boosting iteration
    • <= 0 means no limit
  • skip_drop, default = 0.5, type = double, constraints: 0.0 <= skip_drop <= 1.0
    • used only in dart
    • probability of skipping the dropout procedure during a boosting iteration
  • xgboost_dart_mode, default = false, type = bool
    • used only in dart
    • set this to true, if you want to use xgboost dart mode
  • uniform_drop, default = false, type = bool
    • used only in dart
    • set this to true, if you want to use uniform drop
  • drop_seed, default = 4, type = int
    • used only in dart
    • random seed to choose dropping models
  • top_rate, default = 0.2, type = double, constraints: 0.0 <= top_rate <= 1.0
    • used only in goss
    • the retain ratio of large gradient data
  • other_rate, default = 0.1, type = double, constraints: 0.0 <= other_rate <= 1.0
    • used only in goss
    • the retain ratio of small gradient data
  • min_data_per_group, default = 100, type = int, constraints: min_data_per_group > 0
    • minimal number of data per categorical group
  • max_cat_threshold, default = 32, type = int, constraints: max_cat_threshold > 0
    • used for the categorical features
    • limit the max threshold points in categorical features
  • cat_l2, default = 10.0, type = double, constraints: cat_l2 >= 0.0
    • used for the categorical features
    • L2 regularization in categorical split
  • cat_smooth, default = 10.0, type = double, constraints: cat_smooth >= 0.0
    • used for the categorical features
    • this can reduce the effect of noise in categorical features, especially for categories with few data
  • max_cat_to_onehot, default = 4, type = int, constraints: max_cat_to_onehot > 0
    • when the number of categories of one feature is smaller than or equal to max_cat_to_onehot, the one-vs-other split algorithm will be used
  • top_k, default = 20, type = int, aliases: topk, constraints: top_k > 0
    • used in Voting parallel
    • set this to a larger value for a more accurate result, but it will slow down the training speed
  • monotone_constraints, default = None, type = multi-int, aliases: mc, monotone_constraint
    • used for constraints of monotonic features
    • 1 means increasing, -1 means decreasing, 0 means non-constraint
    • you need to specify all features in order. For example, mc=-1,0,1 means decreasing for the 1st feature, non-constraint for the 2nd feature and increasing for the 3rd feature
  • feature_contri, default = None, type = multi-double, aliases: feature_contrib, fc, fp, feature_penalty
    • used to control a feature's split gain, will use gain[i] = max(0, feature_contri[i]) * gain[i] to replace the split gain of the i-th feature
    • you need to specify all features in order
  • forcedsplits_filename, default = "", type = string, aliases: fs, forced_splits_filename, forced_splits_file, forced_splits
    • path to a .json file that specifies splits to force at the top of every decision tree before best-first learning commences
    • the .json file can be arbitrarily nested, and each split contains feature and threshold fields, as well as left and right fields representing subsplits
    • categorical splits are forced in a one-hot fashion, with left representing the split containing the feature value and right representing other values
    • Note: the forced split logic will be ignored, if the split makes gain worse
    • see this file as an example
  • refit_decay_rate, default = 0.9, type = double, constraints: 0.0 <= refit_decay_rate <= 1.0
    • decay rate of refit task, will use leaf_output = refit_decay_rate * old_leaf_output + (1.0 - refit_decay_rate) * new_leaf_output to refit trees
    • used only in refit task in CLI version or as argument in refit function in language-specific package
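As a rough illustration (values are arbitrary, not recommendations), a parameter dict combining several of these over-fitting controls might look like:

params = {
    'objective': 'binary',
    'max_depth': 6,            # cap depth; the tree still grows leaf-wise
    'num_leaves': 31,
    'min_data_in_leaf': 50,    # larger leaves resist over-fitting
    'feature_fraction': 0.8,   # sample 80% of features per tree
    'bagging_fraction': 0.8,   # sample 80% of rows...
    'bagging_freq': 5,         # ...re-drawn every 5 iterations (needed to enable bagging)
    'lambda_l1': 0.1,          # L1 regularization
    'lambda_l2': 0.1,          # L2 regularization
}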

IO Parameters

  • verbosity, default = 1, type = int, aliases: verbose
    • controls the level of LightGBM's verbosity
    • < 0: Fatal, = 0: Error (Warning), = 1: Info, > 1: Debug
  • max_bin, default = 255, type = int, constraints: max_bin > 1
    • max number of bins that feature values will be bucketed in
    • a small number of bins may reduce training accuracy but may increase generalization power (deal with over-fitting)
    • LightGBM will auto compress memory according to max_bin. For example, LightGBM will use uint8_t for feature values if max_bin=255
  • min_data_in_bin, default = 3, type = int, constraints: min_data_in_bin > 0
    • minimal number of data inside one bin
    • use this to avoid one-data-one-bin (potential over-fitting)
  • bin_construct_sample_cnt, default = 200000, type = int, aliases: subsample_for_bin, constraints: bin_construct_sample_cnt > 0
    • number of data that are sampled to construct histogram bins
    • setting this to a larger value will give a better training result, but will increase data loading time
    • set this to a larger value if data is very sparse
  • histogram_pool_size, default = -1.0, type = double, aliases: hist_pool_size
    • max cache size in MB for historical histogram
    • < 0 means no limit
  • data_random_seed, default = 1, type = int, aliases: data_seed
    • random seed for data partition in parallel learning (excluding the feature_parallel mode)
  • output_model, default = LightGBM_model.txt, type = string, aliases: model_output, model_out
    • filename of output model in training
    • Note: can be used only in CLI version
  • snapshot_freq, default = -1, type = int, aliases: save_period
    • frequency of saving model file snapshot
    • set this to a positive value to enable this function. For example, the model file will be snapshotted at each iteration if snapshot_freq=1
    • Note: can be used only in CLI version
  • input_model, default = "", type = string, aliases: model_input, model_in
    • filename of input model
    • for prediction task, this model will be applied to prediction data
    • for train task, training will be continued from this model
    • Note: can be used only in CLI version
  • output_result, default = LightGBM_predict_result.txt, type = string, aliases: predict_result, prediction_result, predict_name, prediction_name, pred_name, name_pred
    • filename of prediction result in prediction task
    • Note: can be used only in CLI version
  • initscore_filename, default = "", type = string, aliases: init_score_filename, init_score_file, init_score, input_init_score
    • path of file with training initial scores
    • if "", will use train_data_file + .init (if exists)
    • Note: works only in case of loading data directly from file
  • valid_data_initscores, default = "", type = string, aliases: valid_data_init_scores, valid_init_score_file, valid_init_score
    • path(s) of file(s) with validation initial scores
    • if "", will use valid_data_file + .init (if exists)
    • separate by , for multi-validation data
    • Note: works only in case of loading data directly from file
  • pre_partition, default = false, type = bool, aliases: is_pre_partition
    • used for parallel learning (excluding the feature_parallel mode)
    • true if training data are pre-partitioned, and different machines use different partitions
  • enable_bundle, default = true, type = bool, aliases: is_enable_bundle, bundle
    • set this to false to disable Exclusive Feature Bundling (EFB)
  • max_conflict_rate, default = 0.0, type = double, constraints: 0.0 <= max_conflict_rate < 1.0
    • max conflict rate for bundles in EFB
    • set this to 0.0 to disallow conflicts and provide more accurate results
    • set this to a larger value to achieve faster speed
  • is_enable_sparse, default = true, type = bool, aliases: is_sparse, enable_sparse, sparse
    • used to enable/disable sparse optimization
  • sparse_threshold, default = 0.8, type = double, constraints: 0.0 < sparse_threshold <= 1.0
    • the threshold of zero elements percentage for treating a feature as a sparse one
  • use_missing, default = true, type = bool
    • set this to false to disable the special handling of missing values
  • zero_as_missing, default = false, type = bool
    • set this to true to treat all zeros as missing values (including the unshown values in libsvm/sparse matrices)
    • set this to false to use na for representing missing values
  • two_round, default = false, type = bool, aliases: two_round_loading, use_two_round_loading
    • set this to true if the data file is too big to fit in memory
    • by default, LightGBM will map the data file to memory and load features from memory. This will provide faster data loading speed, but may cause a run-out-of-memory error when the data file is very big
    • Note: works only in case of loading data directly from file
  • save_binary, default = false, type = bool, aliases: is_save_binary, is_save_binary_file
    • if true, LightGBM will save the dataset (including validation data) to a binary file. This speeds up data loading for the next time
    • Note: can be used only in CLI version; for language-specific packages you can use the correspondent function
  • header, default = false, type = bool, aliases: has_header
    • set this to true if input data has a header
    • Note: works only in case of loading data directly from file
  • label_column, default = "", type = int or string, aliases: label
    • used to specify the label column
    • use number for index, e.g. label=0 means column_0 is the label
    • add a prefix name: for column name, e.g. label=name:is_click
    • Note: works only in case of loading data directly from file
  • weight_column, default = "", type = int or string, aliases: weight
    • used to specify the weight column
    • use number for index, e.g. weight=0 means column_0 is the weight
    • add a prefix name: for column name, e.g. weight=name:weight
    • Note: works only in case of loading data directly from file
    • Note: index starts from 0 and it doesn't count the label column when the passed type is int, e.g. when label is column_0, and weight is column_1, the correct parameter is weight=0
  • group_column, default = "", type = int or string, aliases: group, group_id, query_column, query, query_id
    • used to specify the query/group id column
    • use number for index, e.g. query=0 means column_0 is the query id
    • add a prefix name: for column name, e.g. query=name:query_id
    • Note: works only in case of loading data directly from file
    • Note: data should be grouped by query_id
    • Note: index starts from 0 and it doesn't count the label column when the passed type is int, e.g. when label is column_0 and query_id is column_1, the correct parameter is query=0
  • ignore_column, default = "", type = multi-int or string, aliases: ignore_feature, blacklist
    • used to specify some columns to ignore in training
    • use numbers for index, e.g. ignore_column=0,1,2 means column_0, column_1 and column_2 will be ignored
    • add a prefix name: for column names, e.g. ignore_column=name:c1,c2,c3 means c1, c2 and c3 will be ignored
    • Note: works only in case of loading data directly from file
    • Note: index starts from 0 and it doesn't count the label column when the passed type is int
    • Note: despite the fact that specified columns will be completely ignored during the training, they still should have a valid format allowing LightGBM to load the file successfully
  • categorical_feature, default = "", type = multi-int or string, aliases: cat_feature, categorical_column, cat_column
    • used to specify categorical features
    • use numbers for index, e.g. categorical_feature=0,1,2 means column_0, column_1 and column_2 are categorical features
    • add a prefix name: for column names, e.g. categorical_feature=name:c1,c2,c3 means c1, c2 and c3 are categorical features
    • Note: only supports categorical features with int type
    • Note: index starts from 0 and it doesn't count the label column when the passed type is int
    • Note: all values should be less than Int32.MaxValue (2147483647)
    • Note: using large values could be memory consuming. Tree decision rules work best when categorical features are represented by consecutive integers starting from zero
    • Note: all negative values will be treated as missing values
  • predict_raw_score, default = false, type = bool, aliases: is_predict_raw_score, predict_rawscore, raw_score
    • used only in prediction task
    • set this to true to predict only the raw scores
    • set this to false to predict transformed scores
  • predict_leaf_index, default = false, type = bool, aliases: is_predict_leaf_index, leaf_index
    • used only in prediction task
    • set this to true to predict with the leaf index of all trees
  • predict_contrib, default = false, type = bool, aliases: is_predict_contrib, contrib
    • used only in prediction task
    • set this to true to estimate SHAP values, which represent how each feature contributes to each prediction
    • produces #features + 1 values where the last value is the expected value of the model output over the training data
    • Note: if you want more explanation for your model's predictions using SHAP values, like SHAP interaction values, you can install the shap package
  • num_iteration_predict, default = -1, type = int
    • used only in prediction task
    • used to specify how many trained iterations will be used in prediction
    • <= 0 means no limit
  • pred_early_stop, default = false, type = bool
    • used only in prediction task
    • if true, will use early-stopping to speed up the prediction. May affect the accuracy
  • pred_early_stop_freq, default = 10, type = int
    • used only in prediction task
    • the frequency of checking early-stopping prediction
  • pred_early_stop_margin, default = 10.0, type = double
    • used only in prediction task
    • the threshold of margin in early-stopping prediction
  • convert_model_language, default = "", type = string
    • used only in convert_model task
    • only cpp is supported yet
    • if convert_model_language is set and task=train, the model will also be converted
    • Note: can be used only in CLI version
  • convert_model, default = gbdt_prediction.cpp, type = string, aliases: convert_model_file
    • used only in convert_model task
    • output filename of converted model
    • Note: can be used only in CLI version
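In the Python package, the prediction-task parameters above correspond to keyword arguments of Booster.predict; a minimal sketch, assuming a trained Booster bst and a feature matrix X:

raw_scores = bst.predict(X, raw_score=True)    # cf. predict_raw_score
leaf_idx = bst.predict(X, pred_leaf=True)      # cf. predict_leaf_index
contribs = bst.predict(X, pred_contrib=True)   # cf. predict_contrib (SHAP values, #features + 1 columns)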

Objective Parameters

  • num_class, default = 1, type = int, aliases: num_classes, constraints: num_class > 0
    • used only in multi-class classification application
  • is_unbalance, default = false, type = bool, aliases: unbalance, unbalanced_sets
    • used only in binary application
    • set this to true if training data are unbalanced
    • Note: this parameter cannot be used at the same time with scale_pos_weight, choose only one of them
  • scale_pos_weight, default = 1.0, type = double, constraints: scale_pos_weight > 0.0
    • used only in binary application
    • weight of labels with positive class
    • Note: this parameter cannot be used at the same time with is_unbalance, choose only one of them
  • sigmoid, default = 1.0, type = double, constraints: sigmoid > 0.0
    • used only in binary and multiclassova classification and in lambdarank applications
    • parameter for the sigmoid function
  • boost_from_average, default = true, type = bool
    • used only in regression, binary and cross-entropy applications
    • adjusts initial score to the mean of labels for faster convergence
  • reg_sqrt, default = false, type = bool
    • used only in regression application
    • used to fit sqrt(label) instead of original values, and the prediction result will also be automatically converted to prediction^2
    • might be useful in case of large-range labels
  • alpha, default = 0.9, type = double, constraints: alpha > 0.0
    • used only in huber and quantile regression applications
    • parameter for Huber loss and Quantile regression
  • fair_c, default = 1.0, type = double, constraints: fair_c > 0.0
    • used only in fair regression application
    • parameter for Fair loss
  • poisson_max_delta_step, default = 0.7, type = double, constraints: poisson_max_delta_step > 0.0
    • used only in poisson regression application
    • parameter for Poisson regression to safeguard optimization
  • tweedie_variance_power, default = 1.5, type = double, constraints: 1.0 <= tweedie_variance_power < 2.0
    • used only in tweedie regression application
    • used to control the variance of the tweedie distribution
    • set this closer to 2 to shift towards a Gamma distribution
    • set this closer to 1 to shift towards a Poisson distribution
  • max_position, default = 20, type = int, constraints: max_position > 0
    • used only in lambdarank application
    • optimizes NDCG at this position
  • label_gain, default = 0,1,3,7,15,31,63,...,2^30-1, type = multi-double
    • used only in lambdarank application
    • relevant gain for labels. For example, the gain of label 2 is 3 in case of default label gains
    • separate by ,
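For example, a multi-class setup must set num_class to match the number of distinct labels (a minimal sketch):

params = {
    'objective': 'multiclass',  # or 'multiclassova'
    'num_class': 3,             # required for multi-class objectives
    'metric': 'multi_logloss',
}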

Metric Parameters

  • metric, default = "", type = multi-enum, aliases: metrics, metric_types
    • metric(s) to be evaluated on the evaluation set(s)
      • "" (empty string or not specified) means that the metric corresponding to the specified objective will be used (this is possible only for pre-defined objective functions, otherwise no evaluation metric will be added)
      • "None" (string, not a None value) means that no metric will be registered, aliases: na, null, custom
      • l1, absolute loss, aliases: mean_absolute_error, mae, regression_l1
      • l2, square loss, aliases: mean_squared_error, mse, regression_l2, regression
      • l2_root, root square loss, aliases: root_mean_squared_error, rmse
      • quantile, Quantile regression
      • mape, MAPE loss, aliases: mean_absolute_percentage_error
      • huber, Huber loss
      • fair, Fair loss
      • poisson, negative log-likelihood for Poisson regression
      • gamma, negative log-likelihood for Gamma regression
      • gamma_deviance, residual deviance for Gamma regression
      • tweedie, negative log-likelihood for Tweedie regression
      • ndcg, NDCG, aliases: lambdarank
      • map, MAP, aliases: mean_average_precision
      • auc, AUC
      • binary_logloss, log loss, aliases: binary
      • binary_error, for one sample: 0 for correct classification, 1 for incorrect classification
      • multi_logloss, log loss for multi-class classification, aliases: multiclass, softmax, multiclassova, multiclass_ova, ova, ovr
      • multi_error, error rate for multi-class classification
      • xentropy, cross-entropy (with optional linear weights), aliases: cross_entropy
      • xentlambda, "intensity-weighted" cross-entropy, aliases: cross_entropy_lambda
      • kldiv, Kullback-Leibler divergence, aliases: kullback_leibler
    • supports multiple metrics, separated by ,
  • metric_freq, default = 1, type = int, aliases: output_freq, constraints: metric_freq > 0
    • frequency for metric output
  • is_provide_training_metric, default = false, type = bool, aliases: training_metric, is_training_metric, train_metric
    • set this to true to output metric results over the training dataset
    • Note: can be used only in CLI version
  • eval_at, default = 1,2,3,4,5, type = multi-int, aliases: ndcg_eval_at, ndcg_at, map_eval_at, map_at
    • used only with ndcg and map metrics
    • NDCG and MAP evaluation positions, separated by ,
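In the Python package, several metrics can be evaluated at once by passing a list (a minimal sketch; train_data and validation_data as constructed earlier):

params = {'objective': 'binary', 'metric': ['auc', 'binary_logloss']}
bst = lgb.train(params, train_data, num_boost_round=50,
                valid_sets=[validation_data], valid_names=['valid'])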


Training the Model

num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[validation_data])
The full signature of lightgbm.train is:

lightgbm.train(params, train_set, num_boost_round=100, valid_sets=None, valid_names=None, fobj=None, feval=None, init_model=None, feature_name='auto', categorical_feature='auto', early_stopping_rounds=None, evals_result=None, verbose_eval=True, learning_rates=None, keep_training_booster=False, callbacks=None)
  • params (dict) – Parameters for training.
  • train_set (Dataset) – Data to be trained on.
  • num_boost_round (int, optional (default=100)) – Number of boosting iterations.
  • valid_sets (list of Datasets or None, optional (default=None)) – List of data to be evaluated on during training.
  • valid_names (list of strings or None, optional (default=None)) – Names of valid_sets.
  • fobj (callable or None, optional (default=None)) – Customized objective function.
  • feval (callable or None, optional (default=None)) – Customized evaluation function. Should accept two parameters: preds, train_data, and return (eval_name, eval_result, is_higher_better) or a list of such tuples. For multi-class task, preds are grouped by class_id first, then by row_id. If you want to get the i-th row preds in the j-th class, the access way is preds[j * num_data + i]. To ignore the default metric corresponding to the used objective, set the metric parameter to the string "None" in params.
  • init_model (string, Booster or None, optional (default=None)) – Filename of a LightGBM model or Booster instance used for continued training.
  • feature_name (list of strings or 'auto', optional (default="auto")) – Feature names. If 'auto' and data is pandas DataFrame, data column names are used.
  • categorical_feature (list of strings or int, or 'auto', optional (default="auto")) – Categorical features. If list of int, interpreted as indices. If list of strings, interpreted as feature names (need to specify feature_name as well). If 'auto' and data is pandas DataFrame, pandas categorical columns are used. All values in categorical features should be less than int32 max value (2147483647). Large values could be memory consuming. Consider using consecutive integers starting from zero. All negative values in categorical features will be treated as missing values.
  • early_stopping_rounds (int or None, optional (default=None)) – Activates early stopping. The model will train until the validation score stops improving. The validation score needs to improve at least every early_stopping_rounds round(s) to continue training. Requires at least one validation set and one metric. If there is more than one, all of them will be checked, but the training data is ignored anyway. To check only the first metric, you can pass the early_stopping callback with first_metric_only=True in callbacks. The index of the iteration that has the best performance will be saved in the best_iteration field if early stopping logic is enabled by setting early_stopping_rounds.
  • evals_result (dict or None, optional (default=None)) –

    This dictionary is used to store all evaluation results of all the items in valid_sets.

    Example

    With valid_sets = [valid_set, train_set], valid_names = ['eval', 'train'] and params = {'metric': 'logloss'}, it returns {'train': {'logloss': ['0.48253', '0.35953', ...]}, 'eval': {'logloss': ['0.480385', '0.357756', ...]}}.

  • verbose_eval (bool or int, optional (default=True)) –

    Requires at least one validation set. If True, the eval metric on the valid set is printed at each boosting stage. If int, the eval metric on the valid set is printed every verbose_eval boosting stages. The last boosting stage or the boosting stage found by using early_stopping_rounds is also printed.

    Example

    With verbose_eval = 4 and at least one item in valid_sets, an evaluation metric is printed every 4 (instead of 1) boosting stages.

  • learning_rates (list, callable or None, optional (default=None)) – List of learning rates for each boosting round, or a customized function that calculates learning_rate in terms of the current round number (e.g. yields learning rate decay).
  • keep_training_booster (bool, optional (default=False)) – Whether the returned Booster will be used to keep training. If False, the returned value will be converted into _InnerPredictor before returning. You can still use _InnerPredictor as init_model for future continued training.
  • callbacks (list of callables or None, optional (default=None)) – List of callback functions that are applied at each iteration. See Callbacks in Python API for more information.
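The feval contract above can be illustrated with a hypothetical custom accuracy metric; note that metric is set to the string "None" so that only the custom metric is evaluated:

import numpy as np

def accuracy(preds, train_data):
    # preds are predicted probabilities for a binary objective
    labels = train_data.get_label()
    return 'accuracy', np.mean((preds > 0.5) == labels), True  # higher is better

bst = lgb.train({'objective': 'binary', 'metric': 'None'}, train_data,
                num_boost_round=50, valid_sets=[validation_data],
                feval=accuracy)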

After training, the model can be saved:

bst.save_model('model.txt')  # text format
json_model = bst.dump_model()  # JSON format

A saved model can be loaded back directly:

bst = lgb.Booster(model_file='model.txt')  # init model

Cross Validation

num_round = 10
lgb.cv(param, train_data, num_round, nfold=5)  # 5-fold CV, 10 boosting rounds per fold
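lgb.cv returns a dict with the per-iteration mean and standard deviation of each metric across folds; a minimal sketch, assuming param includes 'auc' among its metrics:

cv_results = lgb.cv(param, train_data, num_round, nfold=5)
print(cv_results['auc-mean'])  # one mean AUC per boosting round
print(cv_results['auc-stdv'])  # matching standard deviations across folds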

Early Stopping

If you have a validation set, you can use early stopping to find the optimal number of boosting rounds. The model trains until the validation score stops improving; the score must improve at least once every early_stopping_rounds rounds for training to continue.

bst = lgb.train(param, train_data, num_round, valid_sets=valid_sets, early_stopping_rounds=10)
bst.save_model('model.txt', num_iteration=bst.best_iteration)

Prediction

Once trained, the model can be used to make predictions on new data:

# 7 entities, each contains 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
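If early stopping was enabled during training, predictions can be taken from the best iteration:

ypred = bst.predict(data, num_iteration=bst.best_iteration)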

I have only skimmed the paper, and some of the formulas and algorithm details are hard to follow, but the gist of where LightGBM improves is clear.

LightGBM mainly improves computation speed and the handling of sparse data: for speed it proposes GOSS (Gradient-based One-Side Sampling); for sparse, high-dimensional features it proposes EFB (Exclusive Feature Bundling).

GOSS

The core idea is to drop some of the instances with small gradients and compute the information gain from the remaining large-gradient instances, on the grounds that instances with large gradients play a more important role in the information-gain computation. The rough steps are as follows:

  • Sort the instances by the absolute value of their gradients;
  • Take the top a×100% of instances by absolute gradient to form a subset A;
  • Randomly sample b×|A^c| instances from the remaining (1−a)×100% of instances (the set A^c) to form a subset B;
  • Finally, split the instances over A ∪ B according to the estimated variance gain.

The estimated variance gain, reproduced from the paper, where A_l and B_l (A_r and B_r) denote the instances of A and B falling to the left (right) of split point d on feature j, g_i is the gradient of instance x_i, and n_l^j(d), n_r^j(d) are the instance counts on each side:

$$\tilde{V}_j(d) = \frac{1}{n}\left(\frac{\left(\sum_{x_i \in A_l} g_i + \frac{1-a}{b}\sum_{x_i \in B_l} g_i\right)^2}{n_l^j(d)} + \frac{\left(\sum_{x_i \in A_r} g_i + \frac{1-a}{b}\sum_{x_i \in B_r} g_i\right)^2}{n_r^j(d)}\right)$$

The coefficient (1−a)/b re-weights the sampled small-gradient instances so that the gain estimate stays close to the one computed on the full data.

(The paper also gives full pseudocode for GOSS; it is not reproduced here.)
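The sampling step itself can be sketched in NumPy (a simplified illustration, not LightGBM's internal implementation; a and b correspond to the top_rate and other_rate parameters):

import numpy as np

def goss_sample(gradients, a=0.2, b=0.1):
    # sort instance indices by absolute gradient, descending
    n = len(gradients)
    order = np.argsort(-np.abs(gradients))
    top_n, rand_n = int(a * n), int(b * n)
    a_idx = order[:top_n]                         # subset A: large-gradient instances
    b_idx = np.random.choice(order[top_n:],       # subset B: random sample of the rest
                             rand_n, replace=False)
    used = np.concatenate([a_idx, b_idx])
    weights = np.ones(len(used))
    weights[top_n:] = (1.0 - a) / b               # amplify B to keep the gain estimate unbiased
    return used, weights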

EFB

The core idea is to bundle mutually exclusive features (features that rarely take non-zero values at the same time) together to reduce the number of features.

(The paper gives full pseudocode for the bundling and merging procedures; simplified sketches appear below.)

Two problems must be solved: (1) deciding which features should be bundled together; (2) how to construct the bundles.

Partitioning features into a small number of exclusive bundles is NP-hard, so the bundling problem is first reduced to graph coloring: treat each feature as a vertex and add an edge between every pair of features that are not mutually exclusive, then apply a greedy graph-coloring algorithm (which achieves a constant approximation ratio) to produce reasonably good bundles. If a small fraction of conflicts is tolerated, even fewer bundles can be formed, further improving computational efficiency. The rough steps are as follows (see the sketch after this list):

  • Construct a graph with weighted edges, whose weights correspond to the conflicts between features;
  • Sort the features in descending order by their degree in the graph;
  • Check each feature in the ordered list and either assign it to an existing bundle with small conflict (controlled by γ) or create a new bundle.
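A minimal sketch of this greedy bundling (a simplified illustration, not LightGBM's code; conflict_counts[i][j] counts instances where features i and j are both non-zero, and max_conflict plays the role of the conflict budget):

def greedy_bundle(conflict_counts, n_features, max_conflict):
    # order features by total conflict (graph degree), descending
    degrees = [sum(conflict_counts[i]) for i in range(n_features)]
    order = sorted(range(n_features), key=lambda i: -degrees[i])
    bundles, bundle_conflicts = [], []
    for i in order:
        placed = False
        for k, bundle in enumerate(bundles):
            cnt = sum(conflict_counts[i][j] for j in bundle)
            if bundle_conflicts[k] + cnt <= max_conflict:  # small conflict: join this bundle
                bundle.append(i)
                bundle_conflicts[k] += cnt
                placed = True
                break
        if not placed:                                     # otherwise start a new bundle
            bundles.append([i])
            bundle_conflicts.append(0)
    return bundles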

The key to merging a bundle is to ensure that the values of the original features can still be identified within it. Since histogram-based algorithms store discrete bins rather than continuous feature values, this can be done by placing mutually exclusive features in different bins, i.e. by adding offsets to the bin values of the features in a bundle.
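The offset trick can be sketched as follows (a simplified illustration, not LightGBM's code; X holds integer bin values and bin_counts[f] is the number of bins of feature f):

import numpy as np

def merge_bundle(X, bundle, bin_counts):
    # merge exclusive features into one column by giving each feature its own bin range
    merged = np.zeros(X.shape[0], dtype=int)
    offset = 0
    for f in bundle:
        nonzero = X[:, f] != 0
        merged[nonzero] = X[nonzero, f] + offset  # exclusivity: at most one feature is non-zero per row
        offset += bin_counts[f]
    return merged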
