PySpark Model Training Demo (Part 1)

Chapter 1: Classification Models



1. Logistic Regression

Step 1: Build the raw training data with pandas and createDataFrame:

# spark version 3.0.1
from pyspark.sql import SparkSession
from pyspark.ml.classification import LogisticRegression
import pandas as pd

spark = SparkSession.builder.appName("lr-demo").getOrCreate()

# raw model data
pandas_df = pd.DataFrame({
    'a': [1, 1, 0, 1, 0],
    'b': [1, 0, 1, 1, 1],
    'c': [0, 1, 0, 0, 0],
    'y': [0, 0, 0, 1, 1],
    'id': ['A001', 'A002', 'A003', 'A004', 'A005']
})

df = spark.createDataFrame(pandas_df).select("id", "a", "b", "c", "y")
df.show()
+----+---+---+---+---+
|  id|  a|  b|  c|  y|
+----+---+---+---+---+
|A001|  1|  1|  0|  0|
|A002|  1|  0|  1|  0|
|A003|  0|  1|  0|  0|
|A004|  1|  1|  0|  1|
|A005|  0|  1|  0|  1|
+----+---+---+---+---+

Step 2: Assemble the features into a vector and normalize them

from pyspark.ml.feature import VectorAssembler
from pyspark.ml.feature import Normalizer

# assemble the feature columns into a single vector column
vecAss = VectorAssembler(inputCols=['a', 'b', 'c'], outputCol='features')
df_features = vecAss.transform(df)
df_features.show()
+----+---+---+---+---+-------------+
|  id|  a|  b|  c|  y|     features|
+----+---+---+---+---+-------------+
|A001|  1|  1|  0|  0|[1.0,1.0,0.0]|
|A002|  1|  0|  1|  0|[1.0,0.0,1.0]|
|A003|  0|  1|  0|  0|[0.0,1.0,0.0]|
|A004|  1|  1|  0|  1|[1.0,1.0,0.0]|
|A005|  0|  1|  0|  1|[0.0,1.0,0.0]|
+----+---+---+---+---+-------------+

Normalize the features (here with the L1 norm, p=1.0):

Norm = Normalizer(inputCol="features", outputCol="normFeatures", p=1.0)
df_norm_features = Norm.transform(df_features)
df_norm_features.show()
+----+---+---+---+---+-------------+-------------+
|  id|  a|  b|  c|  y|     features| normFeatures|
+----+---+---+---+---+-------------+-------------+
|A001|  1|  1|  0|  0|[1.0,1.0,0.0]|[0.5,0.5,0.0]|
|A002|  1|  0|  1|  0|[1.0,0.0,1.0]|[0.5,0.0,0.5]|
|A003|  0|  1|  0|  0|[0.0,1.0,0.0]|[0.0,1.0,0.0]|
|A004|  1|  1|  0|  1|[1.0,1.0,0.0]|[0.5,0.5,0.0]|
|A005|  0|  1|  0|  1|[0.0,1.0,0.0]|[0.0,1.0,0.0]|
+----+---+---+---+---+-------------+-------------+
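Normalizer with p=1.0 divides each row's vector by its L1 norm (the sum of absolute values), so the components of each normalized row sum to 1. A minimal pure-Python sketch of the same computation (illustrative only, not Spark code; the zero-norm guard is an assumption to keep the function total):

```python
# What Normalizer(p=1.0) does to each row: divide every component
# by the L1 norm, i.e. the sum of absolute values.
def l1_normalize(vec):
    norm = sum(abs(x) for x in vec)
    # guard against division by zero for an all-zero vector
    if norm == 0:
        return list(vec)
    return [x / norm for x in vec]

for row in [[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 0.0]]:
    print(l1_normalize(row))
# [0.5, 0.5, 0.0]
# [0.5, 0.0, 0.5]
# [0.0, 1.0, 0.0]
```

These match the normFeatures column above: rows with two active features become [0.5, 0.5, ...], rows with one stay unit vectors.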

Step 3: Train the model

# train a logistic regression on the normalized features
lr = LogisticRegression(
    featuresCol='normFeatures', labelCol='y',
    maxIter=100, tol=1e-06, threshold=0.5,
    predictionCol='prediction', probabilityCol='probability',
    rawPredictionCol='rawPrediction', standardization=True)
model = lr.fit(df_norm_features)
print(model.coefficients)
[1.8029996152867545,1.803003434834563,-36.96577573215852]
print(model.intercept)
-1.80300332247
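As a sanity check, the probability column can be reproduced by hand from the values printed above: the raw score is the dot product of normFeatures with the coefficients plus the intercept, passed through the sigmoid to give P(y=1); Spark's probability vector lists P(y=0) first. A sketch using the (truncated) printed values:

```python
import math

# fitted values as printed above (the intercept is truncated in the printout)
coef = [1.8029996152867545, 1.803003434834563, -36.96577573215852]
intercept = -1.80300332247

def predict_proba(x):
    # linear score z = w · x + b, then sigmoid gives P(y=1)
    z = sum(w * xi for w, xi in zip(coef, x)) + intercept
    return 1.0 / (1.0 + math.exp(-z))

p1 = predict_proba([0.5, 0.5, 0.0])   # row A001's normFeatures
print(1 - p1)  # ≈ 0.5000004 — the first entry of A001's probability column
```

The score for A001 is almost exactly zero, which is why its probabilities hover at 0.5 and the 0.5 threshold barely tips it to class 0.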

Step 4: Make predictions

# score the data with the fitted model
result = model.transform(df_norm_features)
result.show()

+----+---+---+---+---+-------------+-------------+--------------------+--------------------+----------+
|  id|  a|  b|  c|  y|     features| normFeatures|       rawPrediction|         probability|prediction|
+----+---+---+---+---+-------------+-------------+--------------------+--------------------+----------+
|A001|  1|  1|  0|  0|[1.0,1.0,0.0]|[0.5,0.5,0.0]|[1.79741316608250...|[0.50000044935329...|       0.0|
|A002|  1|  0|  1|  0|[1.0,0.0,1.0]|[0.5,0.0,0.5]|[19.3843913809097...|[0.99999999618525...|       0.0|
|A003|  0|  1|  0|  0|[0.0,1.0,0.0]|[0.0,1.0,0.0]|[-1.1236073826914...|[0.49999997190981...|       1.0|
|A004|  1|  1|  0|  1|[1.0,1.0,0.0]|[0.5,0.5,0.0]|[1.79741316608250...|[0.50000044935329...|       0.0|
|A005|  0|  1|  0|  1|[0.0,1.0,0.0]|[0.0,1.0,0.0]|[-1.1236073826914...|[0.49999997190981...|       1.0|
+----+---+---+---+---+-------------+-------------+--------------------+--------------------+----------+
result.printSchema()
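From the prediction column above we can read off the training accuracy. A minimal pure-Python sketch over the five rows shown (in a real pipeline you would typically use pyspark.ml.evaluation evaluators or the model's summary instead):

```python
# labels (y) and predictions for rows A001..A005 as shown above
labels      = [0, 0, 0, 1, 1]
predictions = [0.0, 0.0, 1.0, 0.0, 1.0]

correct = sum(1 for y, p in zip(labels, predictions) if y == p)
accuracy = correct / len(labels)
print(accuracy)  # 0.6
```

Accuracy is only 0.6 because the toy data is not separable: A001 and A004 share identical features but have different labels, as do A003 and A005, so no classifier can get all five rows right.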

References

[1] https://spark.apache.org/docs/3.0.0/api/python/pyspark.ml.html#pyspark.ml.Model
