According to the [url=https://cwiki.apache.org/confluence/display/MAHOUT/Creating+Vectors+from+Text]wiki[/url],
[size=medium]Mahout can generate vectors in two ways:[/size]
[color=blue][size=large]#1 from a Lucene index to vectors[/size][/color]
[color=blue][size=large]#2 from SequenceFiles to vectors[/size][/color]
[color=blue][size=large]#1 from a Lucene index to vectors[/size][/color]
$MAHOUT_HOME/bin/mahout lucene.vector <PATH TO DIRECTORY CONTAINING LUCENE INDEX> \
--output <PATH TO OUTPUT LOCATION> --field <NAME OF FIELD IN INDEX> --dictOut <PATH TO FILE TO OUTPUT THE DICTIONARY TO> \
<--max <Number of vectors to output>> <--norm {INF|integer >= 0}> <--idField <Name of the idField in the Lucene index>>
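To make the template above concrete, here is a sketch of one possible invocation. All paths, the field name (`body`) and the id field (`id`) are hypothetical placeholders, and `/opt/mahout` is only an assumed install location; substitute your own values. The command is echoed as a dry run rather than executed.

```shell
# Hypothetical sketch: assemble a lucene.vector invocation following the
# usage template above (every path and field name is a placeholder).
MAHOUT_HOME=${MAHOUT_HOME:-/opt/mahout}   # assumed install location
CMD="$MAHOUT_HOME/bin/mahout lucene.vector ./wikipedia-index --output ./wikipedia-vectors --field body --dictOut ./wikipedia-dict.txt --idField id --norm 2"
echo "$CMD"   # dry run; run the command itself once the paths are real
```

The `--norm 2` here picks the L2 norm, matching the `{INF|integer >= 0}` choices listed in the usage string.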
[color=blue][size=large]#2 from SequenceFiles to vectors[/size][/color]
$MAHOUT_HOME/bin/mahout seq2sparse \
-i <PATH TO THE SEQUENCEFILES> -o <OUTPUT DIRECTORY WHERE VECTORS AND DICTIONARY IS GENERATED> \
<-wt <WEIGHTING METHOD USED> {tf|tfidf}> \
<-chunk <MAX SIZE OF DICTIONARY CHUNK IN MB TO KEEP IN MEMORY> 100> \
<-a <NAME OF THE LUCENE ANALYZER TO TOKENIZE THE DOCUMENT> org.apache.lucene.analysis.standard.StandardAnalyzer> \
<--minSupport <MINIMUM SUPPORT> 2> \
<--minDF <MINIMUM DOCUMENT FREQUENCY> 1> \
<--maxDFPercent <MAX PERCENTAGE OF DOCS FOR DF. VALUE BETWEEN 0-100> 99> \
<--norm <REFER TO L_2 NORM ABOVE> {INF|integer >= 0}> \
<-seq <Create SequentialAccessVectors> {false|true required for running some algorithms (LDA, Lanczos)}>
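Likewise, a sketch of a filled-in `seq2sparse` run. The input and output paths are hypothetical placeholders and `/opt/mahout` is an assumed install location; the option values simply pick one choice from each slot of the usage template above. Again echoed as a dry run.

```shell
# Hypothetical sketch: a seq2sparse invocation with the template's slots
# filled in (paths are placeholders; flag values are example choices).
MAHOUT_HOME=${MAHOUT_HOME:-/opt/mahout}   # assumed install location
CMD="$MAHOUT_HOME/bin/mahout seq2sparse -i ./reuters-seqfiles -o ./reuters-vectors -wt tfidf -chunk 100 -a org.apache.lucene.analysis.standard.StandardAnalyzer --minSupport 2 --minDF 1 --maxDFPercent 99 --norm 2 -seq true"
echo "$CMD"   # dry run; run the command itself once the paths are real
```

Note that the analyzer passed to `-a` is a Lucene class, which is where the dependence on Lucene's tokenization discussed below comes from; `-seq true` requests SequentialAccessVectors, which some algorithms (LDA, Lanczos) require.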
It turns out the second approach also relies on Lucene for tokenization, and it too exposes the "MAX SIZE OF DICTIONARY CHUNK IN MB TO KEEP IN MEMORY" setting, a parameter that likewise comes from the Lucene side.
Aha, so it seems that whichever way you generate vectors, the underlying principle is the same: both work off Lucene's index machinery; the second approach simply skips the explicit Lucene index -> vector step.