Installing the LightGBM GPU Version


Official documentation:

https://lightgbm.readthedocs.io/en/latest/GPU-Windows.html

To install the LightGBM GPU version on Windows, see:

https://www.jianshu.com/p/30555fd2bd50

The steps below were installed and tested successfully on Ubuntu 16.04 with Python 3.6.5.

1. Install system dependencies

sudo apt-get install --no-install-recommends git cmake build-essential libboost-dev libboost-system-dev libboost-filesystem-dev

2. Install the required Python packages

pip install setuptools wheel numpy scipy scikit-learn -U

3. Install LightGBM with GPU support

sudo pip3.6 install lightgbm --install-option=--gpu --install-option="--opencl-include-dir=/usr/local/cuda/include/" --install-option="--opencl-library=/usr/local/cuda/lib64/libOpenCL.so"
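Before downloading the benchmark data, the GPU build can be sanity-checked with a minimal script like the one below (a sketch on random data; the parameter values are only illustrative):

# Minimal sanity check for the GPU build (illustrative sketch, random data)
import numpy as np
import lightgbm as lgb

X = np.random.rand(1000, 20)           # 1,000 rows, 20 random features
y = np.random.randint(0, 2, 1000)      # random binary labels

dtrain = lgb.Dataset(X, label=y)
params = {'objective': 'binary', 'device': 'gpu', 'max_bin': 63}

# If GPU support was not compiled in, or OpenCL cannot find a device,
# this call raises a LightGBMError instead of training.
lgb.train(params, dtrain, num_boost_round=5)
print('GPU build looks usable')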

4. Test

First, download the test data and convert it:

git clone https://github.com/guolinke/boosting_tree_benchmarks.git

cd boosting_tree_benchmarks/data

wget "https://archive.ics.uci.edu/ml/machine-learning-databases/00280/HIGGS.csv.gz"

gunzip HIGGS.csv.gz

python higgs2libsvm.py
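The higgs2libsvm.py script rewrites HIGGS.csv into LibSVM format, which should produce the higgs.train file referenced by the test script below. Roughly, the conversion amounts to the following sketch (illustrative only, not the actual script; the output filename is made up):

# Illustrative CSV -> LibSVM conversion (NOT the actual higgs2libsvm.py).
# Each HIGGS.csv row is "label,feature_1,...,feature_28".
with open('HIGGS.csv') as fin, open('higgs.libsvm', 'w') as fout:
    for line in fin:
        fields = line.strip().split(',')
        label, feats = fields[0], fields[1:]
        encoded = ' '.join('{}:{}'.format(i + 1, v) for i, v in enumerate(feats))
        fout.write('{} {}\n'.format(label, encoded))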

Then write the test script:

import lightgbm as lgb
import time

# Parameters for the GPU run
params = {'max_bin': 63,
          'num_leaves': 255,
          'learning_rate': 0.1,
          'tree_learner': 'serial',
          'task': 'train',
          'is_training_metric': 'false',
          'min_data_in_leaf': 1,
          'min_sum_hessian_in_leaf': 100,
          'ndcg_eval_at': [1, 3, 5, 10],
          'sparse_threshold': 1.0,
          'device': 'gpu',
          'gpu_platform_id': 0,
          'gpu_device_id': 0}

dtrain = lgb.Dataset('data/higgs.train')

# Train 10 rounds on the GPU and time it
t0 = time.time()
gbm = lgb.train(params, train_set=dtrain, num_boost_round=10,
                valid_sets=None, valid_names=None,
                fobj=None, feval=None, init_model=None,
                feature_name='auto', categorical_feature='auto',
                early_stopping_rounds=None, evals_result=None,
                verbose_eval=True,
                keep_training_booster=False, callbacks=None)
t1 = time.time()
print('gpu version elapsed time: {}'.format(t1 - t0))

# Same parameters, but run on the CPU for comparison
params = {'max_bin': 63,
          'num_leaves': 255,
          'learning_rate': 0.1,
          'tree_learner': 'serial',
          'task': 'train',
          'is_training_metric': 'false',
          'min_data_in_leaf': 1,
          'min_sum_hessian_in_leaf': 100,
          'ndcg_eval_at': [1, 3, 5, 10],
          'sparse_threshold': 1.0,
          'device': 'cpu'}

# Train 10 rounds on the CPU and time it
t0 = time.time()
gbm = lgb.train(params, train_set=dtrain, num_boost_round=10,
                valid_sets=None, valid_names=None,
                fobj=None, feval=None, init_model=None,
                feature_name='auto', categorical_feature='auto',
                early_stopping_rounds=None, evals_result=None,
                verbose_eval=True,
                keep_training_booster=False, callbacks=None)
t1 = time.time()
print('cpu version elapsed time: {}'.format(t1 - t0))
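To run it, save the script under any name (for example gpu_vs_cpu.py, a name used here only for illustration) in the boosting_tree_benchmarks directory so that the relative path data/higgs.train resolves, and execute it with the same Python interpreter into which LightGBM was installed.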

The test results are as follows; as can be seen, the GPU version is indeed faster than the CPU version.

[Screenshot in the original post: timing output of the GPU and CPU runs]

Author: InterStellar1145

Source: CSDN

Original post: https://blog.csdn.net/lccever/article/details/80535058
