Spark Machine Learning Crash Course — Model 07: Gradient-Boosted Trees (Python)

```python
# -*- coding: utf-8 -*-
from pyspark import SparkConf, SparkContext
from pyspark.mllib.tree import GradientBoostedTrees, GradientBoostedTreesModel
from pyspark.mllib.util import MLUtils

sc = SparkContext('local')

# Load and parse the data file.
data = MLUtils.loadLibSVMFile(sc, "data/mllib/sample_libsvm_data.txt")
'''Each line encodes one labeled sparse feature vector in the format

label index1:value1 index2:value2 ...

>>> tempFile.write(b"+1 1:1.0 3:2.0 5:3.0\\n-1\\n-1 2:4.0 4:5.0 6:6.0")
>>> tempFile.flush()
>>> examples = MLUtils.loadLibSVMFile(sc, tempFile.name).collect()
>>> tempFile.close()
>>> examples[0]
LabeledPoint(1.0, (6,[0,2,4],[1.0,2.0,3.0]))
>>> examples[1]
LabeledPoint(-1.0, (6,[],[]))
>>> examples[2]
LabeledPoint(-1.0, (6,[1,3,5],[4.0,5.0,6.0]))
'''

# Split the data into training and test sets (30% held out for testing).
(trainingData, testData) = data.randomSplit([0.7, 0.3])

# Train a GradientBoostedTrees model.
# Notes: (a) An empty categoricalFeaturesInfo indicates all features are continuous.
#        (b) Use more iterations in practice.
model = GradientBoostedTrees.trainClassifier(trainingData,
                                             categoricalFeaturesInfo={},
                                             numIterations=30)

# Evaluate the model on test instances and compute the test error.
predictions = model.predict(testData.map(lambda x: x.features))
labelsAndPredictions = testData.map(lambda lp: lp.label).zip(predictions)
testErr = labelsAndPredictions.filter(lambda lp: lp[0] != lp[1]).count() \
    / float(testData.count())
print('Test Error = ' + str(testErr))  # Test Error = 0.0

print('Learned classification GBT model:')
print(model.toDebugString())
'''TreeEnsembleModel classifier with 30 trees

  Tree 0:
    If (feature 434 <= 0.0)
     If (feature 100 <= 165.0)
      Predict: -1.0
     Else (feature 100 > 165.0)
      Predict: 1.0
    Else (feature 434 > 0.0)
     Predict: 1.0
  Tree 1:
    If (feature 490 <= 0.0)
     If (feature 549 <= 253.0)
      If (feature 184 <= 0.0)
       Predict: -0.4768116880884702
      Else (feature 184 > 0.0)
       Predict: -0.47681168808847024
     Else (feature 549 > 253.0)
      Predict: 0.4768116880884694
    Else (feature 490 > 0.0)
     If (feature 215 <= 251.0)
      Predict: 0.4768116880884701
     Else (feature 215 > 251.0)
      Predict: 0.4768116880884712
  ...
  Tree 29:
    If (feature 434 <= 0.0)
     If (feature 209 <= 4.0)
      Predict: 0.1335953290513215
     Else (feature 209 > 4.0)
      If (feature 372 <= 84.0)
       Predict: -0.13359532905132146
      Else (feature 372 > 84.0)
       Predict: -0.1335953290513215
    Else (feature 434 > 0.0)
     Predict: 0.13359532905132146
'''

# Save and load the model.
model.save(sc, "myGradientBoostingClassificationModel")
sameModel = GradientBoostedTreesModel.load(sc, "myGradientBoostingClassificationModel")
print(sameModel.predict(data.collect()[0].features))  # 0.0
```
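The `label index1:value1 index2:value2 ...` lines that `MLUtils.loadLibSVMFile` consumes are easy to parse by hand. As a quick illustration, here is a minimal sketch in plain Python (no Spark required; `parse_libsvm_line` is a hypothetical helper, not part of MLlib) that converts the 1-based file indices to 0-based, as the `LabeledPoint` output above shows:

```python
# Hand-rolled parser for one line of the LibSVM text format.
# Returns (label, {feature_index: value}) with 0-based indices.
def parse_libsvm_line(line):
    parts = line.strip().split()
    label = float(parts[0])
    features = {}
    for item in parts[1:]:
        idx, val = item.split(':')
        features[int(idx) - 1] = float(val)  # file indices are 1-based
    return label, features

print(parse_libsvm_line("+1 1:1.0 3:2.0 5:3.0"))
# → (1.0, {0: 1.0, 2: 2.0, 4: 3.0})
```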

Gradient boosting (gradient-boosted trees) is an ensemble learning algorithm that combines multiple decision trees to improve a model's accuracy and stability. In each iteration, gradient boosting builds a new decision tree based on the errors of the preceding trees, so as to reduce the overall model error. Below is an example of gradient boosting in Python using scikit-learn.

First, import the required libraries:

```python
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error
from sklearn.datasets import load_boston
from sklearn.model_selection import train_test_split
```

Next, load a regression dataset (the Boston housing dataset) and split it into training and test sets. Note that `load_boston` was removed in scikit-learn 1.2, so this snippet requires an older version:

```python
boston = load_boston()
X_train, X_test, y_train, y_test = train_test_split(boston.data, boston.target, test_size=0.2, random_state=1)
```

Then create a gradient boosting model and fit it on the training set:

```python
gbt = GradientBoostingRegressor(n_estimators=500, learning_rate=0.1, max_depth=4, random_state=1)
gbt.fit(X_train, y_train)
```

After training, evaluate the model's performance on the test set:

```python
mse = mean_squared_error(y_test, gbt.predict(X_test))
print("MSE: %.4f" % mse)
```

Finally, use the trained model to make predictions:

```python
print("Predicted value:", gbt.predict(X_test[0].reshape(1, -1)))
```

That is a simple example of gradient boosting in Python.
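The core idea described above — each round fits a new tree to the errors (residuals) of the ensemble so far — can be sketched from scratch with decision stumps and squared loss. This is an illustrative toy, not Spark's or scikit-learn's actual implementation; `fit_stump` and `gradient_boost` are hypothetical names:

```python
# Minimal gradient boosting for 1-D regression with decision stumps.
# Under squared loss, the negative gradient is simply the residual y - p.

def fit_stump(xs, residuals):
    """Find the threshold split minimizing squared error on the residuals."""
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda x: lmean if x <= t else rmean

def gradient_boost(xs, ys, n_rounds=20, learning_rate=0.5):
    base = sum(ys) / len(ys)                    # initial model: mean of targets
    stumps = []
    preds = [base] * len(ys)
    for _ in range(n_rounds):
        residuals = [y - p for y, p in zip(ys, preds)]   # errors of current ensemble
        stump = fit_stump(xs, residuals)                 # new tree fit to those errors
        stumps.append(stump)
        preds = [p + learning_rate * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + learning_rate * sum(s(x) for s in stumps)

# Toy data: a step function, which boosting fits closely.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
model = gradient_boost(xs, ys)
print([round(model(x), 2) for x in xs])  # → [1.0, 1.0, 1.0, 5.0, 5.0, 5.0]
```

Each round shrinks the residuals by the learning rate, which is why libraries recommend more iterations with smaller learning rates in practice.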
