Feature Extraction: TF-IDF
import org.apache.spark.ml.feature.{HashingTF, IDF, Tokenizer}
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local")
  .appName("TF-IDF-Test")
  .getOrCreate()
import spark.implicits._  // enable implicit conversions, e.g. Seq/RDD to DataFrame
val sentenceData = spark.createDataFrame(Seq(
  (0, "I heard about Spark and I love Spark"),
  (0, "I wish Java could use case classes"),
  (1, "Logistic regression models are neat")
)).toDF("label", "sentence")
val tokenizer = new Tokenizer().setInputCol("sentence").setOutputCol("words")
val wordsData = tokenizer.transform(sentenceData)
// Note: hashing can map different raw features to the same index (a collision).
// The only way to lower the collision probability is to raise the dimensionality
// of the feature vector, i.e. increase the number of hash buckets.
// The default feature dimension is 2^20 = 1,048,576; here we use 2000 buckets.
val hashingTF = new HashingTF()
  .setInputCol("words")
  .setOutputCol("rawFeatures")
  .setNumFeatures(2000)
val featurizedData = hashingTF.transform(wordsData)
// The token sequence is transformed into a sparse feature vector:
// each word is hashed to an index, and the value stored at that index
// is the word's raw count in the document.
val idf = new IDF().setInputCol("rawFeatures").setOutputCol("features")
val idfModel = idf.fit(featurizedData)
// fit scans the corpus and produces an IDFModel holding each term's document frequency;
// transform then rescales the raw term frequencies by IDF to get the final feature vectors.
val rescaledData = idfModel.transform(featurizedData)
rescaledData.select("features", "label").take(3).foreach(println)
The output (2000 is the number of hash buckets; then come the hashed term indices, the IDF-rescaled weight vector, and the label):
[(2000,[240,333,1105,1329,1357,1777],[0.6931471805599453,0.6931471805599453,1.3862943611198906,0.5753641449035617,0.6931471805599453,0.6931471805599453]),0]
[(2000,[213,342,489,495,1329,1809,1967],[0.6931471805599453,0.6931471805599453,0.6931471805599453,0.6931471805599453,0.28768207245178085,0.6931471805599453,0.6931471805599453]),0]
[(2000,[286,695,1138,1193,1604],[0.6931471805599453,0.6931471805599453,0.6931471805599453,0.6931471805599453,0.6931471805599453]),1]
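The weights above can be checked by hand. Spark ML's IDF uses idf(t) = ln((m + 1) / (df(t) + 1)), where m is the number of documents (3 here) and df(t) is the number of documents containing t; each stored value is tf · idf, with tf the raw count. A minimal verification in plain Scala (no Spark needed):

```scala
object VerifyIdf {
  def main(args: Array[String]): Unit = {
    val m = 3.0  // number of documents in the corpus
    // Spark ML's smoothed IDF formula: ln((m + 1) / (df + 1))
    def idf(df: Double): Double = math.log((m + 1) / (df + 1))

    // "spark" appears only in the first sentence (df = 1), twice there (tf = 2):
    println(2 * idf(1))  // 1.3862943611198906, the large weight in row one
    // "i" appears in the first two sentences (df = 2), twice in the first (tf = 2):
    println(2 * idf(2))  // 0.5753641449035617
    // a term unique to a single document (df = 1, tf = 1):
    println(1 * idf(1))  // 0.6931471805599453
  }
}
```

These match the weight vectors printed above exactly, which confirms that TF here is the raw count, not a normalized frequency.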
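The collision risk mentioned in the comments is easy to see in isolation. The sketch below maps terms to buckets with MurmurHash3 plus a non-negative modulo; it is an illustration only, not Spark's exact implementation (Spark hashes the term's UTF-8 bytes, so the indices here will not match the ones in the output above):

```scala
import scala.util.hashing.MurmurHash3

object HashBucketSketch {
  // Map a term to a bucket index in [0, numFeatures).
  def bucket(term: String, numFeatures: Int): Int = {
    val h = MurmurHash3.stringHash(term, 42)         // fixed seed for determinism
    ((h % numFeatures) + numFeatures) % numFeatures  // non-negative modulo
  }

  def main(args: Array[String]): Unit = {
    val terms = Seq("spark", "java", "logistic", "regression")
    terms.foreach(t => println(s"$t -> ${bucket(t, 2000)}"))
    // With only 2000 buckets and a realistic vocabulary, two distinct terms
    // will eventually share a bucket; raising numFeatures makes that rarer,
    // which is why the default dimension is as large as 2^20.
  }
}
```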