LDA Java input format: understanding the Spark MLlib LDA input format

I am trying to implement LDA using Spark MLlib.

But I am having difficulty understanding the input format. I was able to run the sample implementation, which takes input from a file containing only numbers, as shown:

1 2 6 0 2 3 1 1 0 0 3

1 3 0 1 3 0 0 2 0 0 1

1 4 1 0 0 4 9 0 1 2 0

2 1 0 3 0 0 5 0 2 3 9

3 1 1 9 3 0 2 0 0 1 3

4 2 0 3 4 5 1 1 1 4 0

2 1 0 3 0 0 5 0 2 2 9

1 1 1 9 2 1 2 0 0 1 3

4 4 0 3 4 2 1 3 0 0 0

2 8 2 0 3 0 2 0 2 7 2

1 1 1 9 0 2 2 0 0 3 3

4 1 0 0 4 5 1 3 0 1 0

I understand its output format, as explained here.

My use case is very simple: I have one data file with some sentences.

I want to convert this file into a corpus so that I can pass it to org.apache.spark.mllib.clustering.LDA.run().

My doubt is about what the numbers in the input represent, which are then zipWithIndex'd and passed to LDA. Does the number 1 appearing everywhere represent the same word, or is it some kind of count?

Solution

First you need to convert your sentences into vectors.

import org.apache.spark.mllib.clustering.LDA
import org.apache.spark.mllib.feature.{HashingTF, IDF}
import org.apache.spark.mllib.linalg.Vector
import org.apache.spark.rdd.RDD

// Tokenize each line into a sequence of words
val documents: RDD[Seq[String]] = sc.textFile("yourfile").map(_.split(" ").toSeq)

// Term frequencies via the hashing trick
val hashingTF = new HashingTF()
val tf: RDD[Vector] = hashingTF.transform(documents)

// Re-weight the term frequencies by inverse document frequency
val idf = new IDF().fit(tf)
val tfidf: RDD[Vector] = idf.transform(tf)

// LDA expects an RDD[(Long, Vector)]: a document ID paired with its term vector
val corpus = tfidf.zipWithIndex.map(_.swap).cache()

// Cluster the documents into three topics using LDA
val ldaModel = new LDA().setK(3).run(corpus)
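To make the HashingTF step concrete, here is a plain-Java sketch (no Spark required) of the hashing trick it relies on: each term is mapped to the index hash(term) mod numFeatures, and the count at that index is incremented, yielding a fixed-length term-frequency vector. The class and method names and the vector size here are my own choices for illustration, not Spark API.

```java
import java.util.Arrays;

public class HashingTFSketch {
    // Map each term to hash(term) mod numFeatures and count occurrences,
    // producing a fixed-length term-frequency vector (the hashing trick).
    static double[] hashingTermFrequencies(String[] terms, int numFeatures) {
        double[] tf = new double[numFeatures];
        for (String term : terms) {
            // floorMod keeps the index non-negative even for negative hashCodes
            int index = Math.floorMod(term.hashCode(), numFeatures);
            tf[index] += 1.0;
        }
        return tf;
    }

    public static void main(String[] args) {
        String[] doc = "the cat sat on the mat".split(" ");
        System.out.println(Arrays.toString(hashingTermFrequencies(doc, 16)));
    }
}
```

Repeated terms accumulate at the same index, so the resulting numbers are counts, not word IDs; distinct terms can also collide into the same index, which is the trade-off of the hashing approach.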

Read more about TF-IDF vectorization here.
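A minimal sketch of the re-weighting that the IDF().fit / idf.transform pair performs, assuming the standard smoothed formula idf(t) = log((N + 1) / (df(t) + 1)), where N is the number of documents and df(t) is how many documents contain term t; the class and method names are illustrative, not Spark API:

```java
import java.util.Arrays;

public class IdfSketch {
    // Fit IDF weights over per-document term-frequency vectors:
    // idf(t) = log((N + 1) / (df(t) + 1)), where df(t) counts the
    // documents with a nonzero entry at index t.
    static double[] fitIdf(double[][] tfVectors) {
        int numFeatures = tfVectors[0].length;
        double[] idf = new double[numFeatures];
        for (int t = 0; t < numFeatures; t++) {
            int df = 0;
            for (double[] doc : tfVectors) {
                if (doc[t] > 0) df++;
            }
            idf[t] = Math.log((tfVectors.length + 1.0) / (df + 1.0));
        }
        return idf;
    }

    // tf-idf is the elementwise product of a tf vector and the idf weights
    static double[] transform(double[] tf, double[] idf) {
        double[] out = new double[tf.length];
        for (int t = 0; t < tf.length; t++) out[t] = tf[t] * idf[t];
        return out;
    }

    public static void main(String[] args) {
        double[][] tf = { {2, 0, 1}, {0, 1, 1} };
        double[] idf = fitIdf(tf);
        System.out.println(Arrays.toString(transform(tf[0], idf)));
    }
}
```

Note that a term appearing in every document gets an IDF weight of zero under this formula, so ubiquitous words are suppressed before the vectors reach LDA.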
