Spark Machine Learning Basics - Unsupervised Learning

1. K-means

 
from __future__ import print_function
from pyspark.ml.clustering import KMeans  # hard clustering
# from pyspark.ml.evaluation import ClusteringEvaluator  # evaluation needs a newer PySpark; not available in 2.1
from pyspark.sql import SparkSession

  

! head -5 data/mllib/sample_kmeans_data.txt  # show the first 5 lines

Output:

0 1:0.0 2:0.0 3:0.0
1 1:0.1 2:0.1 3:0.1
2 1:0.2 2:0.2 3:0.2
3 1:9.0 2:9.0 3:9.0
4 1:9.1 2:9.1 3:9.1
 
spark = SparkSession\
    .builder\
    .appName("KMeansExample")\
    .getOrCreate()

dataset = spark.read.format("libsvm").load("data/mllib/sample_kmeans_data.txt")  # libsvm format is mainly used to store sparse data

# Train the K-means clustering model
kmeans = KMeans().setK(2).setSeed(1)  # setK sets the number of cluster centers
model = kmeans.fit(dataset)

# Predict (i.e. assign each point to a cluster center)
predictions = model.transform(dataset)

# Evaluate with the Silhouette score (needs a newer PySpark than 2.1)
# evaluator = ClusteringEvaluator()
# silhouette = evaluator.evaluate(predictions)
# print("Silhouette with squared euclidean distance = " + str(silhouette))

# Print the prediction results
print("predicted Center: ")
for center in predictions[['prediction']].collect():
    print(center.asDict())

# Cluster centers
centers = model.clusterCenters()
print("Cluster Centers: ")
for center in centers:
    print(center)

spark.stop()

Output:

predicted Center: 
{'prediction': 0}
{'prediction': 0}
{'prediction': 0}
{'prediction': 1}
{'prediction': 1}
{'prediction': 1}
Cluster Centers: 
[ 0.1  0.1  0.1]
[ 9.1  9.1  9.1]
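
On a newer PySpark installation, the commented-out evaluation step above can be used to score the clustering. A minimal sketch, assuming the same `predictions` DataFrame produced by `model.transform(dataset)`:

from pyspark.ml.evaluation import ClusteringEvaluator  # not available in PySpark 2.1

# Silhouette score (squared Euclidean distance is the default metric);
# values close to 1 indicate well-separated clusters.
evaluator = ClusteringEvaluator(featuresCol="features", predictionCol="prediction")
silhouette = evaluator.evaluate(predictions)
print("Silhouette with squared euclidean distance = " + str(silhouette))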

2. GMM (Gaussian Mixture Model)

 
from __future__ import print_function
from pyspark.ml.clustering import GaussianMixture  # soft clustering; compare with KMeans
from pyspark.sql import SparkSession

  

 
spark = SparkSession\
    .builder\
    .appName("GaussianMixtureExample")\
    .getOrCreate()

dataset = spark.read.format("libsvm").load("data/mllib/sample_kmeans_data.txt")

gmm = GaussianMixture().setK(2).setSeed(0)  # setK(2): fit two Gaussian components, each with its own mean, covariance, and mixture weight
model = gmm.fit(dataset)

print("Gaussians shown as a DataFrame: ")
model.gaussiansDF.show(truncate=False)

spark.stop()

Output:

Gaussians shown as a DataFrame: 
+-------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|mean                                                         |cov                                                                                                                                                                                                     |
+-------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|[9.099999999999985,9.099999999999985,9.099999999999985]      |0.006666666666783764  0.006666666666783764  0.006666666666783764  
0.006666666666783764  0.006666666666783764  0.006666666666783764  
0.006666666666783764  0.006666666666783764  0.006666666666783764  |
|[0.10000000000001552,0.10000000000001552,0.10000000000001552]|0.006666666666806455  0.006666666666806455  0.006666666666806455  
0.006666666666806455  0.006666666666806455  0.006666666666806455  
0.006666666666806455  0.006666666666806455  0.006666666666806455  |
+-------------------------------------------------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
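
Unlike K-means, GMM is a soft-clustering method: each point gets a probability of belonging to every Gaussian component. A minimal sketch to inspect those membership probabilities, assuming the `model` and `dataset` from the example above:

# transform() adds a "prediction" column (the most likely component)
# and a "probability" column (membership probabilities over the K components).
transformed = model.transform(dataset)
transformed.select("features", "probability", "prediction").show(truncate=False)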

3. Association Rules (FP-Growth)

The code below uses the pre-2.2 PySpark (RDD-based) API; for newer versions, refer to the DataFrame-based program that follows it.

 
from pyspark.mllib.fpm import FPGrowth  # RDD-based API (PySpark 2.1)
from pyspark.sql import SparkSession

spark = SparkSession\
    .builder\
    .appName("FPGrowthExample")\
    .getOrCreate()

data = spark.sparkContext.textFile("data/mllib/sample_fpgrowth.txt")
transactions = data.map(lambda line: line.strip().split(' '))
model = FPGrowth.train(transactions, minSupport=0.2, numPartitions=10)
result = model.freqItemsets().collect()
for fi in result:
    print(fi)

spark.stop()

Output:

FreqItemset(items=[u'z'], freq=5)
FreqItemset(items=[u'x'], freq=4)
FreqItemset(items=[u'x', u'z'], freq=3)
FreqItemset(items=[u'y'], freq=3)
FreqItemset(items=[u'y', u'x'], freq=3)
FreqItemset(items=[u'y', u'x', u'z'], freq=3)
FreqItemset(items=[u'y', u'z'], freq=3)
FreqItemset(items=[u'r'], freq=3)
FreqItemset(items=[u'r', u'x'], freq=2)
FreqItemset(items=[u'r', u'z'], freq=2)
FreqItemset(items=[u's'], freq=3)
FreqItemset(items=[u's', u'y'], freq=2)
FreqItemset(items=[u's', u'y', u'x'], freq=2)
FreqItemset(items=[u's', u'y', u'x', u'z'], freq=2)
FreqItemset(items=[u's', u'y', u'z'], freq=2)
FreqItemset(items=[u's', u'x'], freq=3)
FreqItemset(items=[u's', u'x', u'z'], freq=2)
FreqItemset(items=[u's', u'z'], freq=2)
FreqItemset(items=[u't'], freq=3)
FreqItemset(items=[u't', u'y'], freq=3)
FreqItemset(items=[u't', u'y', u'x'], freq=3)
FreqItemset(items=[u't', u'y', u'x', u'z'], freq=3)
FreqItemset(items=[u't', u'y', u'z'], freq=3)
FreqItemset(items=[u't', u's'], freq=2)
FreqItemset(items=[u't', u's', u'y'], freq=2)
FreqItemset(items=[u't', u's', u'y', u'x'], freq=2)
FreqItemset(items=[u't', u's', u'y', u'x', u'z'], freq=2)
FreqItemset(items=[u't', u's', u'y', u'z'], freq=2)
FreqItemset(items=[u't', u's', u'x'], freq=2)
FreqItemset(items=[u't', u's', u'x', u'z'], freq=2)
FreqItemset(items=[u't', u's', u'z'], freq=2)
FreqItemset(items=[u't', u'x'], freq=3)
FreqItemset(items=[u't', u'x', u'z'], freq=3)
FreqItemset(items=[u't', u'z'], freq=3)
FreqItemset(items=[u'p'], freq=2)
FreqItemset(items=[u'p', u'r'], freq=2)
FreqItemset(items=[u'p', u'r', u'z'], freq=2)
FreqItemset(items=[u'p', u'z'], freq=2)
FreqItemset(items=[u'q'], freq=2)
FreqItemset(items=[u'q', u'y'], freq=2)
FreqItemset(items=[u'q', u'y', u'x'], freq=2)
FreqItemset(items=[u'q', u'y', u'x', u'z'], freq=2)
FreqItemset(items=[u'q', u'y', u'z'], freq=2)
FreqItemset(items=[u'q', u't'], freq=2)
FreqItemset(items=[u'q', u't', u'y'], freq=2)
FreqItemset(items=[u'q', u't', u'y', u'x'], freq=2)
FreqItemset(items=[u'q', u't', u'y', u'x', u'z'], freq=2)
FreqItemset(items=[u'q', u't', u'y', u'z'], freq=2)
FreqItemset(items=[u'q', u't', u'x'], freq=2)
FreqItemset(items=[u'q', u't', u'x', u'z'], freq=2)
FreqItemset(items=[u'q', u't', u'z'], freq=2)
FreqItemset(items=[u'q', u'x'], freq=2)
FreqItemset(items=[u'q', u'x', u'z'], freq=2)
FreqItemset(items=[u'q', u'z'], freq=2)

 

 
# PySpark 2.2+ DataFrame-based API
from pyspark.ml.fpm import FPGrowth
from pyspark.sql import SparkSession

spark = SparkSession\
    .builder\
    .appName("FPGrowthExample")\
    .getOrCreate()

df = spark.createDataFrame([
    (0, [1, 2, 5]),
    (1, [1, 2, 3, 5]),
    (2, [1, 2])
], ["id", "items"])

fpGrowth = FPGrowth(itemsCol="items", minSupport=0.5, minConfidence=0.6)
model = fpGrowth.fit(df)

# Display frequent itemsets.
model.freqItemsets.show()

# Display generated association rules.
model.associationRules.show()

# transform examines the input items against all the association rules and summarizes the
# consequents as the prediction
model.transform(df).show()

spark.stop()
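
The `associationRules` DataFrame produced by the fitted model has `antecedent`, `consequent`, and `confidence` columns, so ordinary DataFrame operations can be used to pick out the strongest rules. A small illustrative sketch based on the model fitted above (the 0.8 threshold is arbitrary):

# Keep only high-confidence rules and show the strongest ones first.
strong_rules = model.associationRules.filter("confidence >= 0.8")
strong_rules.orderBy("confidence", ascending=False).show(truncate=False)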

  

4. LDA Topic Model

 
from __future__ import print_function
from pyspark.ml.clustering import LDA
from pyspark.sql import SparkSession

  

! head -5 data/mllib/sample_lda_libsvm_data.txt

Output:

0 1:1 2:2 3:6 4:0 5:2 6:3 7:1 8:1 9:0 10:0 11:3
1 1:1 2:3 3:0 4:1 5:3 6:0 7:0 8:2 9:0 10:0 11:1
2 1:1 2:4 3:1 4:0 5:0 6:4 7:9 8:0 9:1 10:2 11:0
3 1:2 2:1 3:0 4:3 5:0 6:0 7:5 8:0 9:2 10:3 11:9
4 1:3 2:1 3:1 4:9 5:3 6:0 7:2 8:0 9:0 10:1 11:3
 
spark = SparkSession \
    .builder \
    .appName("LDAExample") \
    .getOrCreate()

# Load the data
dataset = spark.read.format("libsvm").load("data/mllib/sample_lda_libsvm_data.txt")

# Train the LDA model
lda = LDA(k=10, maxIter=10)  # k=10: 10 topics
model = lda.fit(dataset)

ll = model.logLikelihood(dataset)
lp = model.logPerplexity(dataset)
print("The lower bound on the log likelihood of the entire corpus: " + str(ll))
print("The upper bound on perplexity: " + str(lp) + "\n")

# Describe the topics
topics = model.describeTopics(3)  # truncated to the top 3 terms per topic; you can request more
print("The topics described by their top-weighted terms:")
topics.show(truncate=False)

# Transform the dataset (per-document topic distributions)
print("transform dataset:\n")
transformed = model.transform(dataset)
transformed.show(truncate=False)

spark.stop()

Output:

The lower bound on the log likelihood of the entire corpus: -806.81672765
The upper bound on perplexity: 3.10314127288

The topics described by their top-weighted terms:
+-----+-----------+---------------------------------------------------------------+
|topic|termIndices|termWeights                                                    |
+-----+-----------+---------------------------------------------------------------+
|0    |[4, 7, 10] |[0.10782283322528141, 0.09748059064869798, 0.09623489511403283]|
|1    |[1, 6, 9]  |[0.16755677717574005, 0.14746677066462868, 0.12291625834665457]|
|2    |[1, 3, 9]  |[0.10064404373379261, 0.10044232016910744, 0.09911430786912553]|
|3    |[3, 10, 4] |[0.2405485337093881, 0.11474862445349779, 0.09436360804237896] |
|4    |[9, 10, 3] |[0.10479881323144603, 0.10207366164963672, 0.0981847998287497] |
|5    |[8, 5, 7]  |[0.10843492932441408, 0.09701504850837554, 0.09334497740169005]|
|6    |[8, 5, 0]  |[0.09874156843227488, 0.09654281376143092, 0.09565958598645523]|
|7    |[9, 4, 7]  |[0.11252485087182341, 0.09755086126590837, 0.09643430677076377]|
|8    |[4, 1, 2]  |[0.10994282164614115, 0.09410686880245682, 0.09374715192052394]|
|9    |[5, 4, 0]  |[0.1526594065996145, 0.1401540984288492, 0.13878637240223393]  |
+-----+-----------+---------------------------------------------------------------+

transform dataset:

+-----+---------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|label|features                                                       |topicDistribution                                                                                                                                                                                                       |
+-----+---------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
|0.0  |(11,[0,1,2,4,5,6,7,10],[1.0,2.0,6.0,2.0,3.0,1.0,1.0,3.0])      |[0.004830688530254547,0.9563372032839312,0.004830653288196159,0.0049247000529390305,0.0048306686997597464,0.004830691229644231,0.004830725952841193,0.0048306754566327355,0.004830728026376915,0.004923265479424051]    |
|1.0  |(11,[0,1,3,4,7,10],[1.0,3.0,1.0,3.0,2.0,1.0])                  |[0.008057778649104173,0.3148301429775185,0.0080578223830065,0.008215777942952482,0.008057720361154553,0.008057732489412228,0.008057717726124932,0.00805779103670622,0.008057840506543925,0.6205496759274765]            |
|2.0  |(11,[0,1,2,5,6,8,9],[1.0,4.0,1.0,4.0,9.0,1.0,2.0])             |[0.00419974114073206,0.9620399748900924,0.004199830998962131,0.0042814231963878655,0.004199801535688566,0.004199819689459903,0.004199830433436027,0.0041997822111186295,0.004199798534630995,0.0042799973694913]        |
|3.0  |(11,[0,1,3,6,8,9,10],[2.0,1.0,3.0,5.0,2.0,3.0,9.0])            |[0.0037148958393689426,0.5313564622081751,0.00371492700514763,0.4388535874884561,0.0037150382511682853,0.0037149506801198505,0.0037149808253623792,0.0037148901801274804,0.0037149076678115434,0.003785359854262734]    |
|4.0  |(11,[0,1,2,3,4,6,9,10],[3.0,1.0,1.0,9.0,3.0,2.0,1.0,3.0])      |[0.0040247360335797875,0.004348642552867576,0.004024775025300721,0.9633765038034603,0.004024773228145383,0.004024740478088116,0.00402477627651187,0.004024779618260475,0.004024784270292531,0.004101488713493013]       |
|5.0  |(11,[0,1,3,4,5,6,7,8,9],[4.0,2.0,3.0,4.0,5.0,1.0,1.0,1.0,4.0]) |[0.003714916663186164,0.004014116840889892,0.0037150323955768686,0.003787652360887051,0.0037149873236278505,0.003714958841217428,0.0037149705182189397,0.003715010255807931,0.0037149614099447853,0.9661933933906431]   |
|6.0  |(11,[0,1,3,6,8,9,10],[2.0,1.0,3.0,5.0,2.0,2.0,9.0])            |[0.003863635977009055,0.46449322935025966,0.0038636657354113126,0.5045241029221541,0.00386374420636613,0.0038636976398721237,0.003863727255143564,0.0038636207140121358,0.003863650494529744,0.003936925705242072]      |
|7.0  |(11,[0,1,2,3,4,5,6,9,10],[1.0,1.0,1.0,9.0,2.0,1.0,2.0,1.0,3.0])|[0.004390966123798511,0.004744425233669778,0.004391025010757086,0.9600440191238313,0.004391023986304413,0.00439098335688734,0.004391015731875719,0.004391018535344605,0.0043910130377361935,0.004474509859794904]       |
|8.0  |(11,[0,1,3,4,5,6,7],[4.0,4.0,3.0,4.0,2.0,1.0,3.0])             |[0.004391082402111978,0.0047448016288253025,0.004391206864616806,0.004477234571510909,0.004391077028823487,0.004391110359190354,0.004391102894332411,0.004391148031605367,0.004391148275359693,0.9600400879436237]      |
|9.0  |(11,[0,1,2,4,6,8,9,10],[2.0,8.0,2.0,3.0,2.0,2.0,7.0,2.0])      |[0.0033302167331450425,0.9698997342829896,0.003330238365882342,0.003394964707825143,0.0033302157712121493,0.0033302303649837654,0.0033302236683277224,0.0033302294595984666,0.0033302405714942906,0.0033937060745413443]|
|10.0 |(11,[0,1,2,3,5,6,9,10],[1.0,1.0,1.0,9.0,2.0,2.0,3.0,3.0])      |[0.004199896541927494,0.00453848296824474,0.004200002237282065,0.9617819044818944,0.004200011124996577,0.004199942048495426,0.004199991764268097,0.004200001048497312,0.004199935367663148,0.004279832416731015]        |
|11.0 |(11,[0,1,4,5,6,7,9],[4.0,1.0,4.0,5.0,1.0,3.0,1.0])             |[0.004830560338779577,0.005219247495550288,0.004830593014957423,0.004924448157616727,0.00483055816775155,0.004830577856153918,0.004830584648561171,0.00483060040145597,0.004830612377397914,0.9560422175417754]         |
+-----+---------------------------------------------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
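
Note that `describeTopics` only returns term indices. On real text (rather than this pre-vectorized libsvm file), the features are usually built with `CountVectorizer`, whose fitted vocabulary maps those indices back to words. A rough sketch under that assumption (`docs_df` and its `words` column are hypothetical):

from pyspark.ml.feature import CountVectorizer

# docs_df is assumed to have a "words" column of tokenized documents.
cv_model = CountVectorizer(inputCol="words", outputCol="features").fit(docs_df)
vocab = cv_model.vocabulary  # list: term index -> term string

lda_model = LDA(k=10, maxIter=10).fit(cv_model.transform(docs_df))
for row in lda_model.describeTopics(3).collect():
    print("topic %d: %s" % (row.topic, [vocab[i] for i in row.termIndices]))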

 

5. PCA Dimensionality Reduction

 
from __future__ import print_function
from pyspark.ml.feature import PCA
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession

  

 
spark = SparkSession\
    .builder\
    .appName("PCAExample")\
    .getOrCreate()

# Build some fake data
data = [(Vectors.sparse(5, [(1, 1.0), (3, 7.0)]),),  # sparse vector
        (Vectors.dense([2.0, 0.0, 3.0, 4.0, 5.0]),),  # dense vector
        (Vectors.dense([4.0, 0.0, 0.0, 6.0, 7.0]),)]
df = spark.createDataFrame(data, ["features"])

# PCA dimensionality reduction
pca = PCA(k=3, inputCol="features", outputCol="pcaFeatures")
model = pca.fit(df)

result = model.transform(df).select("pcaFeatures")
result.show(truncate=False)

spark.stop()

Output:

+-----------------------------------------------------------+
|pcaFeatures                                                |
+-----------------------------------------------------------+
|[1.6485728230883807,-4.013282700516296,-5.524543751369388] |
|[-4.645104331781534,-1.1167972663619026,-5.524543751369387]|
|[-6.428880535676489,-5.337951427775355,-5.524543751369389] |
+-----------------------------------------------------------+
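
To decide how many components are worth keeping, the fitted PCAModel also exposes the proportion of variance captured by each principal component; a quick sketch using the `model` fitted above:

# Proportion of variance explained by each of the k principal components.
print("Explained variance: %s" % str(model.explainedVariance))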

6. word2vec Embeddings

 
from __future__ import print_function
from pyspark.ml.feature import Word2Vec
from pyspark.sql import SparkSession

  

 
spark = SparkSession\
    .builder\
    .appName("Word2VecExample")\
    .getOrCreate()

# The input is in bag-of-words form (a column of token lists)
documentDF = spark.createDataFrame([
    ("Hi I heard about Spark".split(" "), ),
    ("I wish Java could use case classes".split(" "), ),
    ("Logistic regression models are neat".split(" "), )
], ["text"])

# Set the embedding size and other parameters, then learn the word embeddings
word2Vec = Word2Vec(vectorSize=3, minCount=0, inputCol="text", outputCol="result")
model = word2Vec.fit(documentDF)

# Print the words and their vectors
model.getVectors().show()

result = model.transform(documentDF)
for row in result.collect():
    text, vector = row
    print("Text: [%s] => \nVector: %s\n" % (", ".join(text), str(vector)))

spark.stop()

Output:

+----------+--------------------+
|      word|              vector|
+----------+--------------------+
|     heard|[0.08829052001237...|
|       are|[-0.1314301639795...|
|      neat|[0.09875790774822...|
|   classes|[-0.0047773420810...|
|         I|[0.15081347525119...|
|regression|[-0.0732467696070...|
|  Logistic|[0.04169865325093...|
|     Spark|[-0.0096837198361...|
|     could|[-0.0907106027007...|
|       use|[-0.1245830804109...|
|        Hi|[0.03222155943512...|
|    models|[0.15642452239990...|
|      case|[-0.1072710305452...|
|     about|[0.13248910009860...|
|      Java|[0.08521263301372...|
|      wish|[0.02581630274653...|
+----------+--------------------+

Text: [Hi, I, heard, about, Spark] => 
Vector: [0.0788261869922,-0.00265940129757,0.0531761907041]

Text: [I, wish, Java, could, use, case, classes] => 
Vector: [-0.00935709210379,-0.015802019309,0.0161747672329]

Text: [Logistic, regression, models, are, neat] => 
Vector: [0.0184408299625,-0.012609430775,0.0135096866637]
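
The fitted Word2VecModel can also query nearest neighbours in the embedding space; a minimal sketch using the model trained above (with only 3-dimensional vectors and a toy corpus the neighbours are not very meaningful, so the query word and count are purely illustrative):

# Find the 2 words whose vectors are closest (by cosine similarity) to "Spark".
model.findSynonyms("Spark", 2).show()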