Model Fusion Practice on the Life Sciences Track | Alibaba Cloud AI Summer Camp

Hands-On: Biological Age Assessment and Age-Related Disease Risk Prediction Competition

1. Competition link: https://tianchi.aliyun.com/s/6a1351ecd2a3987995a7bda7f62542d2

2. Competition objective:

The organizers provide DNA methylation data from healthy individuals and from patients with age-related diseases; the task is to predict a person's age using their methylation profile as features.

3. Basic code (data loading + data cleaning + feature engineering)

Data loading, cleaning, and feature engineering follow the baseline provided by the platform. Owing to my limited experience, the results of my own analysis and feature selection on this dataset were not satisfactory and await further exploration. To stay within my hardware limits, I only load the first 10,000 rows (CpG sites) of the dataset; in principle, using more of the data should give better results. A memory-friendlier loading variant is sketched below.
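
If memory allows, more CpG sites can be loaded by downcasting the beta values to float32, which roughly halves the footprint compared with the default float64. This is a minimal sketch, assuming the same file layout as the baseline (rows are CpG sites, columns are samples); N_SITES is a hypothetical limit, not a value from the original write-up:

import numpy as np
import pandas as pd

path = 'ai4bio'
N_SITES = 20000  # hypothetical: number of CpG rows to read; raise or drop nrows if RAM permits

# Read the first N_SITES CpG rows and downcast the beta values to float32
raw = pd.read_csv(f'{path}/traindata.csv', nrows=N_SITES, index_col='cpgsite')
raw = raw.astype(np.float32)
print(raw.shape, f'{raw.memory_usage(deep=True).sum() / 1e6:.1f} MB')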

# Import libraries
import numpy as np
import pandas as pd
from collections import defaultdict, Counter
import xgboost as xgb
import lightgbm as lgb
from catboost import CatBoostRegressor
from sklearn.model_selection import StratifiedKFold, KFold, GroupKFold
from sklearn.metrics import mean_squared_log_error,mean_absolute_error
import sys,os,gc,argparse,warnings
warnings.filterwarnings('ignore')

# Load the data
path = 'ai4bio'

# Read only the first 10,000 CpG rows; after the transpose below they become the feature columns
traindata = pd.read_csv(f'{path}/traindata.csv', nrows=10000)
trainmap = pd.read_csv(f'{path}/trainmap.csv')

testdata = pd.read_csv(f'{path}/ai4bio_testset_final/testdata.csv', nrows=10000)
testmap = pd.read_csv(f'{path}/ai4bio_testset_final/testmap.csv')

traindata = traindata.set_index('cpgsite')
traindata = traindata.T
traindata = traindata.reset_index()
traindata = traindata.rename(columns={'index':'sample_id'})
traindata.columns = ['sample_id'] + [i for i in range(10000)]
traindata.to_pickle(f'{path}/traindata.pkl')

testdata = testdata.set_index('cpgsite')
testdata = testdata.T
testdata = testdata.reset_index()
testdata = testdata.rename(columns={'index':'sample_id'})
testdata.columns = ['sample_id'] + [i for i in range(10000)]
testdata.to_pickle(f'{path}/testdata.pkl')

trainmap = pd.read_csv(f'{path}/trainmap.csv')
testmap = pd.read_csv(f'{path}/ai4bio_testset_final/testmap.csv')

traindata = pd.read_pickle(f'{path}/traindata.pkl')
testdata = pd.read_pickle(f'{path}/testdata.pkl')

# Merge in the sample metadata (age label and covariates)
traindata = traindata.merge(trainmap[['sample_id', 'age', 'gender', 'sample_type', 'disease']],on='sample_id',how='left')
testdata = testdata.merge(testmap[['sample_id', 'gender']],on='sample_id',how='left')

# Data cleaning: encode categorical fields as integers
disease_mapping = {
    'control': 0,
    "Alzheimer's disease": 1,
    "Graves' disease": 2,
    "Huntington's disease": 3,
    "Parkinson's disease": 4,
    'rheumatoid arthritis': 5,
    'schizophrenia': 6,
    "Sjogren's syndrome": 7,
    'stroke': 8,
    'type 2 diabetes': 9
}
sample_type_mapping = {'control': 0, 'disease tissue': 1}
gender_mapping = {'F': 0, 'M': 1}

traindata['disease'] = traindata['disease'].map(disease_mapping)
traindata['sample_type'] = traindata['sample_type'].map(sample_type_mapping)
traindata['gender'] = traindata['gender'].map(gender_mapping)

testdata['gender'] = testdata['gender'].map(gender_mapping)

# Build row-wise statistical features over the CpG columns
traindata['max'] = traindata[[i for i in range(10000)]].max(axis=1)
traindata['min'] = traindata[[i for i in range(10000)]].min(axis=1)
traindata['std'] = traindata[[i for i in range(10000)]].std(axis=1)
traindata['var'] = traindata[[i for i in range(10000)]].var(axis=1)
traindata['skew'] = traindata[[i for i in range(10000)]].skew(axis=1)
traindata['mean'] = traindata[[i for i in range(10000)]].mean(axis=1)
traindata['median'] = traindata[[i for i in range(10000)]].median(axis=1)

testdata['max'] = testdata[[i for i in range(10000)]].max(axis=1)
testdata['min'] = testdata[[i for i in range(10000)]].min(axis=1)
testdata['std'] = testdata[[i for i in range(10000)]].std(axis=1)
testdata['var'] = testdata[[i for i in range(10000)]].var(axis=1)
testdata['skew'] = testdata[[i for i in range(10000)]].skew(axis=1)
testdata['mean'] = testdata[[i for i in range(10000)]].mean(axis=1)
testdata['median'] = testdata[[i for i in range(10000)]].median(axis=1)
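
The two blocks above compute the same seven row-wise statistics separately for the train and test tables; an equivalent, more compact form loops over the statistic names and both DataFrames. A minimal sketch that should produce the same columns as the code above:

feat_cols = [i for i in range(10000)]
for name in ['max', 'min', 'std', 'var', 'skew', 'mean', 'median']:
    for df in (traindata, testdata):
        # getattr picks the matching row-wise pandas reduction, e.g. df[feat_cols].max(axis=1)
        df[name] = getattr(df[feat_cols], name)(axis=1)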

# Columns fed into the models
cols = [i for i in range(10000)] + ['gender','max','min','std','var','skew','mean','median']
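
The feature list above keeps all 10,000 CpG columns plus gender and the summary statistics. As a possible further exploration (not part of the baseline, and the cutoff TOP_K below is a hypothetical choice), the CpG sites could be ranked by their absolute Pearson correlation with age on the training set and only the top-k kept:

TOP_K = 2000  # hypothetical cutoff; tune via cross-validation
cpg_cols = [i for i in range(10000)]

# Rank CpG columns by |corr(beta value, age)| computed on the training samples
corr_with_age = traindata[cpg_cols].corrwith(traindata['age']).abs()
selected = corr_with_age.sort_values(ascending=False).head(TOP_K).index.tolist()

cols_selected = selected + ['gender', 'max', 'min', 'std', 'var', 'skew', 'mean', 'median']

Strictly speaking, target-based selection like this should be repeated inside each CV fold to avoid leaking label information into the validation scores.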

4. Model fusion

Three models are used: LightGBM, XGBoost, and CatBoost. Training took about five and a half hours on the V100 instance provided by Alibaba Cloud.

4.1 Obtaining the training results of the three models
def cv_model(clf, train_x, train_y, test_x, clf_name, seed = 2023):
    # Shared K-fold training loop for the three models: returns out-of-fold
    # predictions on the training set and fold-averaged test-set predictions.
    folds = 5
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)
    oof = np.zeros(train_x.shape[0])
    test_predict = np.zeros(test_x.shape[0])
    cv_scores = []
    
    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        print('************************************ {} ************************************'.format(str(i+1)))
        trn_x, trn_y, val_x, val_y = train_x.iloc[train_index], train_y[train_index], train_x.iloc[valid_index], train_y[valid_index]
        
        if clf_name == "lgb":
            train_matrix = clf.Dataset(trn_x, label=trn_y)
            valid_matrix = clf.Dataset(val_x, label=val_y)
            params = {
                'boosting_type': 'gbdt',
                'objective': 'regression',
                'metric': 'mae',
                'min_child_weight': 6,
                'num_leaves': 2 ** 6,
                'lambda_l2': 10,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 4,
                'learning_rate': 0.1,
                'seed': 2023,
                'nthread' : 16,
                'verbose' : -1,
            }
            model = clf.train(params, train_matrix, 2000, valid_sets=[train_matrix, valid_matrix],
                              categorical_feature=[])
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration)
        
        if clf_name == "xgb":
            xgb_params = {
              'booster': 'gbtree', 
              'objective': 'reg:squarederror',
              'eval_metric': 'mae',
              'max_depth': 5,
              'lambda': 10,
              'subsample': 0.7,
              'colsample_bytree': 0.7,
              'colsample_bylevel': 0.7,
              'eta': 0.1,
              'tree_method': 'hist',
              'seed': 520,
              'nthread': 16
              }
            train_matrix = clf.DMatrix(trn_x , label=trn_y)
            valid_matrix = clf.DMatrix(val_x , label=val_y)
            test_matrix = clf.DMatrix(test_x)
            
            watchlist = [(train_matrix, 'train'),(valid_matrix, 'eval')]
            
            model = clf.train(xgb_params, train_matrix, num_boost_round=2000, evals=watchlist, verbose_eval=200, early_stopping_rounds=100)
            val_pred  = model.predict(valid_matrix)
            test_pred = model.predict(test_matrix)
            
        if clf_name == "cat":
            params = {'learning_rate': 0.1, 'depth': 5, 'bootstrap_type': 'Bernoulli',
                      'od_type': 'Iter', 'od_wait': 100, 'random_seed': 11, 'allow_writing_files': False}
            
            model = clf(iterations=2000, **params)
            model.fit(trn_x, trn_y, eval_set=(val_x, val_y),
                      metric_period=200,
                      use_best_model=True, 
                      cat_features=[],
                      verbose=1)
            
            val_pred  = model.predict(val_x)
            test_pred = model.predict(test_x)
        
        oof[valid_index] = val_pred
        test_predict += test_pred / kf.n_splits
        
        score = mean_absolute_error(val_y, val_pred)
        cv_scores.append(score)
        print(cv_scores)
        
    return oof, test_predict

# Following the demo; see the baseline walkthrough for how cv_model is called in practice
# LightGBM model
lgb_oof, lgb_test = cv_model(lgb, traindata[cols], traindata['age'], testdata[cols], 'lgb')
# XGBoost model
xgb_oof, xgb_test = cv_model(xgb, traindata[cols], traindata['age'], testdata[cols], 'xgb')
# CatBoost model
cat_oof, cat_test = cv_model(CatBoostRegressor, traindata[cols], traindata['age'], testdata[cols], 'cat')

# Fuse by taking the simple average of the three test-set predictions
final_test = (lgb_test + xgb_test + cat_test) / 3
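
Before fixing on the equal-weight average, its effect can be sanity-checked on the out-of-fold predictions, and a weighted average can be tried as an alternative. A minimal sketch, assuming the lgb_oof, xgb_oof and cat_oof arrays returned above:

y_true = traindata['age'].values

# MAE of the simple average on the out-of-fold predictions
avg_oof = (lgb_oof + xgb_oof + cat_oof) / 3
print('average-fusion OOF MAE:', mean_absolute_error(y_true, avg_oof))

# Optional variant: weight each model by the inverse of its own OOF MAE
maes = np.array([mean_absolute_error(y_true, o) for o in (lgb_oof, xgb_oof, cat_oof)])
weights = (1 / maes) / (1 / maes).sum()
weighted_test = weights[0] * lgb_test + weights[1] * xgb_test + weights[2] * cat_test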

LightGBM training log (the numbers in the printed list are the per-fold score values):
[Figure: LightGBM training log]
XGBoost training log:
[Figure: XGBoost training log]
CatBoost training log:
[Figure: CatBoost training log]

4.2 Model fusion (stacking)
def stack_model(oof_1, oof_2, oof_3, predictions_1, predictions_2, predictions_3, y):
    '''
    oof_1, oof_2, oof_3 correspond to lgb_oof, xgb_oof, cat_oof;
    predictions_1, predictions_2, predictions_3 correspond to lgb_test, xgb_test, cat_test.
    '''
    # Convert the arrays to DataFrames so they can be concatenated
    oof_1 = pd.DataFrame(oof_1)
    oof_2 = pd.DataFrame(oof_2)
    oof_3 = pd.DataFrame(oof_3)
    predictions_1 = pd.DataFrame(predictions_1)
    predictions_2 = pd.DataFrame(predictions_2)
    predictions_3 = pd.DataFrame(predictions_3)
    train_stack = pd.concat([oof_1, oof_2, oof_3], axis=1)
    test_stack = pd.concat([predictions_1, predictions_2, predictions_3], axis=1)
    
    oof = np.zeros((train_stack.shape[0],))
    predictions = np.zeros((test_stack.shape[0],))
    scores = []
    
    from sklearn.model_selection import RepeatedKFold
    from sklearn import linear_model

    folds = RepeatedKFold(n_splits=5, n_repeats=2, random_state=2021)
    
    for fold_, (trn_idx, val_idx) in enumerate(folds.split(train_stack, train_stack)): 
        print("fold n°{}".format(fold_+1))
        trn_data, trn_y = train_stack.loc[trn_idx], y[trn_idx]
        val_data, val_y = train_stack.loc[val_idx], y[val_idx]
        
        
        # Ridge regression as the second-level (meta) model
        clf = linear_model.Ridge(random_state=2021)
        clf.fit(trn_data, trn_y)

        oof[val_idx] = clf.predict(val_data)
        predictions += clf.predict(test_stack) / (5 * 2)
        
        score_single = mean_absolute_error(val_y, oof[val_idx])
        scores.append(score_single)
        print(f'{fold_+1}/{folds.get_n_splits()}', score_single)
    print('mean: ',np.mean(scores))
   
    return oof, predictions

Problems encountered during training:

  • TypeError: cannot concatenate object of type '<class 'numpy.ndarray'>'
    Fix: convert the arrays to DataFrames first, e.g.
oof_1 = pd.DataFrame(oof_1)
  • NameError: name 'Ridge' is not defined
    Fix: qualify Ridge with its module, i.e. linear_model
clf = linear_model.Ridge(random_state=2021)
4.3 Training the stacked model
stack_oof, stack_pred = stack_model(lgb_oof, xgb_oof, cat_oof, lgb_test, xgb_test, cat_test, traindata['age'])

Training result after model fusion:
[Figure: stacking training log]
Compared with the score values obtained by training the three models separately above, the score after model fusion is smaller, so fusion gives a better result; a quick check on the out-of-fold predictions is sketched below.
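
A minimal sketch of that comparison, assuming the variables produced above, since all out-of-fold predictions are scored against the same training labels:

y_true = traindata['age'].values
for name, pred in [('lgb', lgb_oof), ('xgb', xgb_oof), ('cat', cat_oof), ('stack', stack_oof)]:
    print(name, 'OOF MAE:', mean_absolute_error(y_true, pred))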

4.4 Exporting the data
testdata['age'] = cat_test
testdata['age'] = testdata['age'].astype(float)
testdata['age'] = testdata['age'].apply(lambda x: x if x>0 else 0.0)
testdata['age'] = testdata['age'].apply(lambda x: '%.2f' % x)
testdata[['sample_id','age']].to_csv('submit.txt',index=False)
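
Note that the export above submits the CatBoost test predictions (cat_test). If the stacked predictions are preferred instead, the same post-processing can be applied to stack_pred (or to the averaged final_test); a sketch, written to a separate file so the original submit.txt is untouched:

submission = testdata[['sample_id']].copy()
submission['age'] = np.clip(stack_pred, 0, None)  # floor negative ages at 0
submission['age'] = submission['age'].apply(lambda x: '%.2f' % x)
submission[['sample_id', 'age']].to_csv('submit_stack.txt', index=False)
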
4.5 Result after uploading the submission to the Alibaba Cloud competition platform

[Figure: submission score on the competition platform]
