Introduction to lightgbm.train Parameters

Signature:
lightgbm.train(
params: Dict[str, Any],
train_set: lightgbm.basic.Dataset,
num_boost_round: int = 100,
valid_sets: Optional[List[lightgbm.basic.Dataset]] = None,
valid_names: Optional[List[str]] = None,
fobj: Optional[Callable[[Union[List, numpy.ndarray], lightgbm.basic.Dataset], Tuple[Union[List, numpy.ndarray], Union[List, numpy.ndarray]]]] = None,
feval: Union[Callable[[Union[List, numpy.ndarray], lightgbm.basic.Dataset], Tuple[str, float, bool]], List[Callable[[Union[List, numpy.ndarray], lightgbm.basic.Dataset], Tuple[str, float, bool]]], NoneType] = None,
init_model: Union[str, pathlib.Path, lightgbm.basic.Booster, NoneType] = None,
feature_name: Union[List[str], str] = 'auto',
categorical_feature: Union[List[str], List[int], str] = 'auto',
early_stopping_rounds: Optional[int] = None,
evals_result: Optional[Dict[str, Any]] = None,
verbose_eval: Union[bool, int, str] = 'warn',
learning_rates: Union[List[float], Callable[[int], float], NoneType] = None,
keep_training_booster: bool = False,
callbacks: Optional[List[Callable]] = None,
) -> lightgbm.basic.Booster
Docstring:
Perform the training with given parameters.
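
.. rubric:: Example

A minimal end-to-end sketch of a typical call; the toy data here is purely illustrative:

    import lightgbm as lgb
    import numpy as np

    # Hypothetical toy data for illustration.
    rng = np.random.default_rng(0)
    X = rng.random((500, 10))
    y = rng.integers(0, 2, 500)

    train_set = lgb.Dataset(X, label=y)
    params = {"objective": "binary", "metric": "binary_logloss", "verbosity": -1}

    booster = lgb.train(params, train_set, num_boost_round=100)
    preds = booster.predict(X)  # probabilities of the positive class

The later sketches below reuse ``params``, ``train_set``, and this toy data.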

Parameters

params : dict
Parameters for training.
train_set : Dataset
Data to be trained on.
num_boost_round : int, optional (default=100)
Number of boosting iterations.
valid_sets : list of Dataset, or None, optional (default=None)
List of data to be evaluated on during training.
valid_names : list of str, or None, optional (default=None)
Names of valid_sets.
fobj : callable or None, optional (default=None)
Customized objective function.
Should accept two parameters: preds, train_data,
and return (grad, hess).

    preds : list or numpy 1-D array
        The predicted values.
        Predicted values are returned before any transformation,
        e.g. they are raw margin instead of probability of positive class for binary task.
    train_data : Dataset
        The training dataset.
    grad : list or numpy 1-D array
        The value of the first order derivative (gradient) of the loss
        with respect to the elements of preds for each sample point.
    hess : list or numpy 1-D array
        The value of the second order derivative (Hessian) of the loss
        with respect to the elements of preds for each sample point.

For a multi-class task, preds are grouped by class_id first, then by row_id.
To access the prediction for the i-th row in the j-th class, use preds[j * num_data + i],
and grad and hess should be grouped in the same way.
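
.. rubric:: Example

As a sketch, binary log loss written as a custom objective; note that with a custom
objective the returned Booster predicts raw margins rather than probabilities:

    import numpy as np

    def logistic_obj(preds, train_data):
        """Binary log loss; preds are raw margins, labels come from the Dataset."""
        y = train_data.get_label()
        p = 1.0 / (1.0 + np.exp(-preds))  # sigmoid of the raw margin
        grad = p - y                      # first derivative w.r.t. the margin
        hess = p * (1.0 - p)              # second derivative w.r.t. the margin
        return grad, hess

    booster = lgb.train(params, train_set, num_boost_round=100, fobj=logistic_obj)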

feval : callable, list of callable, or None, optional (default=None)
Customized evaluation function.
Each evaluation function should accept two parameters: preds, train_data,
and return (eval_name, eval_result, is_higher_better) or list of such tuples.

    preds : list or numpy 1-D array
        The predicted values.
        If ``fobj`` is specified, predicted values are returned before any transformation,
        e.g. they are raw margin instead of probability of positive class for binary task in this case.
    train_data : Dataset
        The training dataset.
    eval_name : str
        The name of evaluation function (without whitespaces).
    eval_result : float
        The eval result.
    is_higher_better : bool
        Whether a higher eval result is better, e.g. AUC is ``is_higher_better``.

For a multi-class task, preds are grouped by class_id first, then by row_id.
To access the prediction for the i-th row in the j-th class, use preds[j * num_data + i].
To ignore the default metric corresponding to the used objective,
set the ``metric`` parameter to the string ``"None"`` in ``params``.
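
.. rubric:: Example

A sketch of a custom metric following this contract; ``my_error`` is a made-up name,
and since lower is better the third element is False:

    def my_error(preds, train_data):
        """Misclassification rate; assumes preds are probabilities (built-in objective)."""
        y = train_data.get_label()
        return "my_error", float(np.mean((preds > 0.5) != y)), False

    booster = lgb.train(
        params, train_set,
        valid_sets=[train_set], valid_names=["train"],
        feval=my_error,
    )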

init_model : str, pathlib.Path, Booster or None, optional (default=None)
Filename of LightGBM model or Booster instance used to continue training.
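
.. rubric:: Example

Training can be resumed from a saved model file or an in-memory Booster
(the file name is illustrative):

    booster = lgb.train(params, train_set, num_boost_round=50)
    booster.save_model("model.txt")  # illustrative path

    # 50 more rounds on top of the saved model; a Booster instance works too.
    booster = lgb.train(params, train_set, num_boost_round=50, init_model="model.txt")
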
feature_name : list of str, or 'auto', optional (default="auto")
Feature names.
If 'auto' and data is pandas DataFrame, data column names are used.
categorical_feature : list of str or int, or 'auto', optional (default="auto")
Categorical features.
If list of int, interpreted as indices.
If list of str, interpreted as feature names (need to specify feature_name as well).
If 'auto' and data is pandas DataFrame, pandas unordered categorical columns are used.
All values in categorical features should be less than int32 max value (2147483647).
Large values could be memory consuming. Consider using consecutive integers starting from zero.
All negative values in categorical features will be treated as missing values.
The output cannot be monotonically constrained with respect to a categorical feature.
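
.. rubric:: Example

A sketch with a pandas DataFrame, marking one column as categorical by name
(the column names are made up):

    import pandas as pd

    df = pd.DataFrame({
        "age": rng.integers(18, 70, 500),
        "city": rng.integers(0, 5, 500),  # non-negative integer codes
    })
    train_set_cat = lgb.Dataset(df, label=y)
    booster = lgb.train(
        params, train_set_cat,
        feature_name=["age", "city"],   # needed to reference categories by name
        categorical_feature=["city"],
    )
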
early_stopping_rounds : int or None, optional (default=None)
Activates early stopping. The model will train until the validation score stops improving.
Validation score needs to improve at least every early_stopping_rounds round(s)
to continue training.
Requires at least one validation data and one metric.
If there is more than one metric, all of them will be checked. The training data, however, is ignored in any case.
To check only the first metric, set the first_metric_only parameter to True in params.
The index of iteration that has the best performance will be saved in the best_iteration field
if early stopping logic is enabled by setting early_stopping_rounds.
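
.. rubric:: Example

A sketch with a held-out validation set (the split is illustrative):

    X_val, y_val = rng.random((100, 10)), rng.integers(0, 2, 100)
    valid_set = lgb.Dataset(X_val, label=y_val, reference=train_set)

    booster = lgb.train(
        params, train_set,
        num_boost_round=1000,
        valid_sets=[valid_set],
        early_stopping_rounds=10,  # stop after 10 rounds without improvement
    )
    print(booster.best_iteration)
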
evals_result : dict or None, optional (default=None)
Dictionary used to store all evaluation results of all the items in valid_sets.
This should be initialized outside of your call to train() and should be empty.
Any initial contents of the dictionary will be deleted.

.. rubric:: Example

With a ``valid_sets`` = [valid_set, train_set],
``valid_names`` = ['eval', 'train']
and a ``params`` = {'metric': 'logloss'}
returns {'train': {'logloss': ['0.48253', '0.35953', ...]},
'eval': {'logloss': ['0.480385', '0.357756', ...]}}.
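
In code, mirroring the example above (``valid_set`` comes from the early-stopping sketch):

    evals_result = {}  # must be created empty before the call
    booster = lgb.train(
        params, train_set,
        valid_sets=[valid_set, train_set],
        valid_names=["eval", "train"],
        evals_result=evals_result,
    )
    # evals_result["eval"]["binary_logloss"] now holds one value per round.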

verbose_eval : bool, int, or str, optional (default='warn')
Requires at least one validation data.
If True, the eval metric on the valid set is printed at each boosting stage.
If int, the eval metric on the valid set is printed at every verbose_eval boosting stage.
The last boosting stage or the boosting stage found by using early_stopping_rounds is also printed.

.. rubric:: Example

With ``verbose_eval`` = 4 and at least one item in ``valid_sets``,
an evaluation metric is printed every 4 (instead of 1) boosting stages.

learning_rates : list, callable or None, optional (default=None)
List of learning rates for each boosting round,
or a callable that computes the learning rate from the current round number
(which can be used, e.g., to implement learning rate decay).
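
.. rubric:: Example

For instance, an exponentially decaying schedule (the constants are arbitrary):

    booster = lgb.train(
        params, train_set,
        num_boost_round=100,
        learning_rates=lambda i: 0.1 * (0.99 ** i),  # i is the current round index
    )
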
keep_training_booster : bool, optional (default=False)
Whether the returned Booster will be used to keep training.
If False, the returned value will be converted into _InnerPredictor before it is returned.
This means you won't be able to use the eval, eval_train, or eval_valid methods of the returned Booster.
If your model is very large and causes memory errors, you can try setting this parameter to True
to avoid the model conversion performed during the internal call to model_to_string.
You can still use _InnerPredictor as init_model to continue training later.
callbacks : list of callable, or None, optional (default=None)
List of callback functions that are applied at each iteration.
See Callbacks in Python API for more information.
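
.. rubric:: Example

Early stopping and periodic logging can also be expressed as callbacks;
``lgb.early_stopping`` and ``lgb.log_evaluation`` are available in recent LightGBM releases:

    booster = lgb.train(
        params, train_set,
        num_boost_round=1000,
        valid_sets=[valid_set],
        callbacks=[
            lgb.early_stopping(stopping_rounds=10),
            lgb.log_evaluation(period=5),  # print metrics every 5 rounds
        ],
    )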
