1. Set the init_model parameter to the model you want to fine-tune; this continues learning by adding n more trees on top of the existing booster;
2. lgb.train(init_model=model_to_finetune, keep_training_booster=True)
import lightgbm as lgb

# Note: early_stopping_rounds and verbose_eval are keyword arguments in
# LightGBM < 4.0; in 4.0+ use callbacks=[lgb.early_stopping(10), lgb.log_evaluation(0)].
model = lgb.train(params,
                  lgb_train,
                  num_boost_round=1000,
                  valid_sets=[lgb_eval],
                  feature_name=x_cols,
                  early_stopping_rounds=10,
                  verbose_eval=False,
                  init_model=model,  # if init_model is not None, training continues from that model
                  keep_training_booster=True)  # incremental training
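
For context, here is a minimal end-to-end sketch of batch-wise incremental training, assuming data that arrives in chunks; the synthetic DataFrames, column names, and hyperparameters are illustrative and not from the original.

import lightgbm as lgb
import numpy as np
import pandas as pd

# Synthetic stand-in for data that arrives in batches over time (hypothetical example).
rng = np.random.default_rng(0)
x_cols = [f"f{i}" for i in range(5)]
batches = [
    pd.DataFrame(rng.normal(size=(1000, 5)), columns=x_cols)
      .assign(y=lambda d: (d["f0"] + d["f1"] > 0).astype(int))
    for _ in range(2)
]

params = {"objective": "binary", "learning_rate": 0.1, "verbose": -1}

model = None
for batch in batches:
    lgb_train = lgb.Dataset(batch[x_cols], label=batch["y"])
    # First pass: init_model=None trains from scratch.
    # Later passes: each call appends new trees to the previous booster.
    model = lgb.train(params,
                      lgb_train,
                      num_boost_round=100,
                      init_model=model,
                      keep_training_booster=True)

print(model.num_trees())  # grows with each pass: 100 trees, then 100 more

With keep_training_booster=True the returned booster stays in its trainable form (skipping the internal conversion to a predictor), which is why the snippet can feed model straight back into the next lgb.train call.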