XGBoost custom evaluation metric: multiclass macro F1 (2020-09-24)

When training XGBoost, the evaluation metric needs to be macro F1, but the built-in `eval_metric` options do not include it, so a custom metric is required.

```python
import numpy as np
from sklearn import metrics

def f1_macro(preds, dtrain):
    """
    Custom f1_macro for use as an XGBoost eval_metric.
    The input is the class-probability matrix predicted by the model
    (objective 'multi:softprob').

    :param preds: numpy.ndarray
        Estimated class probabilities as returned by the classifier,
        of shape (num_sample, num_class).
    :param dtrain: xgboost.DMatrix
        Data holding the training labels.
    :return: tuple (metric_name, value)
    """
    y_train = dtrain.get_label()        # numpy.ndarray of true labels
    y_pred = np.argmax(preds, axis=1)   # most likely class per sample
    return 'f1_macro', metrics.f1_score(y_train, y_pred, average='macro')
```
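As a quick sanity check, the argmax-plus-`f1_score` logic inside `f1_macro` can be exercised on a small synthetic probability matrix (the values below are hypothetical, not real model output):

```python
import numpy as np
from sklearn import metrics

# Hypothetical class probabilities for 4 samples and 3 classes.
preds = np.array([
    [0.7, 0.2, 0.1],   # predicted class 0
    [0.1, 0.8, 0.1],   # predicted class 1
    [0.2, 0.3, 0.5],   # predicted class 2
    [0.6, 0.3, 0.1],   # predicted class 0 (true class is 1)
])
y_true = np.array([0, 1, 2, 1])

y_pred = preds.argmax(axis=1)  # same as np.argmax(row) per sample
score = metrics.f1_score(y_true, y_pred, average='macro')
print(y_pred, score)  # [0 1 2 0], macro F1 averages per-class F1 scores
```

Macro averaging computes the F1 of each class independently and takes their unweighted mean, so rare classes count as much as frequent ones.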

Python custom objective demo

https://github.com/dmlc/xgboost/blob/master/demo/guide-python/custom_objective.py

```python
import numpy as np

# User-defined evaluation function; returns a pair (metric_name, result).
# NOTE: with a customized objective, the default prediction value is the
# margin, i.e. the score before the logistic transformation.
def evalerror(preds, dtrain):
    labels = dtrain.get_label()
    preds = 1.0 / (1.0 + np.exp(-preds))  # transform raw margins to probabilities
    # Return a pair (metric_name, result). The metric name must not contain
    # a colon (:) or a space.
    return 'my-error', float(np.sum(labels != (preds > 0.5))) / len(labels)
```
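To see the margin-to-probability step in isolation, here is a minimal numeric sketch (the margin values are hypothetical) of the same logistic transform and error count used in `evalerror`:

```python
import numpy as np

margins = np.array([-2.0, 0.0, 3.0])     # hypothetical raw margins (pre-sigmoid scores)
probs = 1.0 / (1.0 + np.exp(-margins))   # logistic transform, as in evalerror
labels = np.array([0.0, 1.0, 1.0])

# Threshold at 0.5 and count the fraction of mismatches.
error = float(np.sum(labels != (probs > 0.5))) / len(labels)
print(probs.round(3), error)
```

A margin of 0.0 maps to a probability of exactly 0.5, which the strict `> 0.5` comparison treats as the negative class; that sample is the single misclassification here.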

 

Below is example code for a 6-class classification task with a LightGBM model, using StratifiedKFold cross-validation and macro F1 as the evaluation metric:

```python
import numpy as np
import lightgbm as lgb
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import f1_score

# Assume the feature matrix is X and the labels are y.

# Model parameters
params = {
    'boosting_type': 'gbdt',
    'objective': 'multiclass',
    'num_class': 6,
    'metric': 'multi_logloss',
    'num_leaves': 31,
    'learning_rate': 0.05,
    'feature_fraction': 0.9,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'verbose': -1,
    'random_state': 2021
}

# StratifiedKFold cross-validation
n_splits = 5
skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=2021)

# Out-of-fold containers
oof_preds = np.zeros(X.shape[0])
class_preds = np.zeros(X.shape[0])

# Run cross-validation
for fold, (train_idx, valid_idx) in enumerate(skf.split(X, y)):
    print("Fold", fold + 1)
    X_train, X_valid = X[train_idx], X[valid_idx]
    y_train, y_valid = y[train_idx], y[valid_idx]

    # Build training data
    lgb_train = lgb.Dataset(X_train, y_train)
    lgb_valid = lgb.Dataset(X_valid, y_valid)

    # Train the model
    # (for LightGBM >= 4, pass callbacks=[lgb.early_stopping(100),
    # lgb.log_evaluation(100)] instead of these two keyword arguments)
    model = lgb.train(params, lgb_train, valid_sets=[lgb_valid],
                      num_boost_round=10000,
                      early_stopping_rounds=100, verbose_eval=100)

    # Predict on the validation fold
    valid_preds = model.predict(X_valid, num_iteration=model.best_iteration)
    oof_preds[valid_idx] = valid_preds.argmax(axis=1)
    class_preds[valid_idx] = valid_preds.max(axis=1)
    print("-" * 50)

# Report the cross-validation result
macro_f1 = f1_score(y, oof_preds, average='macro')
print("Overall Macro-F1:", macro_f1)
```

In this example, sklearn's `f1_score` computes the macro F1; the `average` parameter must be set to `'macro'`. The final output is the macro F1 over the out-of-fold predictions for the whole dataset.
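The point of StratifiedKFold here is that each fold preserves the class proportions of the full dataset, which matters for macro F1 on imbalanced classes. A minimal sketch with toy labels (hypothetical, 8 samples of class 0 and 4 of class 1):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Toy imbalanced labels: 2:1 ratio of class 0 to class 1.
y = np.array([0] * 8 + [1] * 4)
X = np.arange(len(y)).reshape(-1, 1)

skf = StratifiedKFold(n_splits=2, shuffle=True, random_state=2021)
fold_counts = []
for train_idx, valid_idx in skf.split(X, y):
    # Each validation fold keeps the 2:1 class ratio of the full data.
    fold_counts.append(np.bincount(y[valid_idx]).tolist())
print(fold_counts)  # every fold holds 4 samples of class 0 and 2 of class 1
```

A plain KFold split on the same data could put all class-1 samples into one fold, distorting per-class F1 on the other folds.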
