The last step: generating the tf-idf vectors.
The entry point is TFIDFConverter.processTfIdf, which again takes the tf-vectors directory as its input.
It first runs makePartialVectors, a Hadoop job whose Mapper is the default (identity) Mapper and whose Reducer is TFIDFPartialVectorReducer; a sketch of the job wiring follows, then the reducer's reduce() method.
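Here is a rough, hypothetical sketch of how such a partial-vector pass could be wired up. The class name, paths, and exact job configuration are illustrative assumptions, not Mahout's actual driver code; the real makePartialVectors also ships a document-frequency dictionary chunk to the reducers via the DistributedCache, which is omitted here.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.SequenceFileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.SequenceFileOutputFormat;
import org.apache.mahout.math.VectorWritable;
import org.apache.mahout.vectorizer.tfidf.TFIDFPartialVectorReducer; // package may differ by Mahout version

// Hypothetical driver for one partial-vector pass over the tf-vectors directory.
public class MakePartialVectorsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = new Job(conf, "make-tfidf-partial-vectors");

    job.setInputFormatClass(SequenceFileInputFormat.class);
    job.setOutputFormatClass(SequenceFileOutputFormat.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(VectorWritable.class);

    job.setMapperClass(Mapper.class);                      // default (identity) mapper
    job.setReducerClass(TFIDFPartialVectorReducer.class);  // applies the tf-idf weighting

    SequenceFileInputFormat.setInputPaths(job, new Path("tf-vectors"));          // illustrative paths
    SequenceFileOutputFormat.setOutputPath(job, new Path("partial-vectors-0"));

    job.waitForCompletion(true);
  }
}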
@Override
protected void reduce(WritableComparable<?> key, Iterable<VectorWritable> values, Context context)
    throws IOException, InterruptedException {
  Iterator<VectorWritable> it = values.iterator();
  if (!it.hasNext()) {
    return;
  }
  Vector value = it.next().get();
  Iterator<Vector.Element> it1 = value.iterateNonZero();
  Vector vector = new RandomAccessSparseVector((int) featureCount, value.getNumNondefaultElements());
  while (it1.hasNext()) {
    Vector.Element e = it1.next();
    // skip terms that are not in the df dictionary
    if (!dictionary.containsKey(e.index())) {
      continue;
    }
    long df = dictionary.get(e.index());
    // drop terms whose document frequency exceeds the configured maximum
    if (maxDf > -1 && df > maxDf) {
      continue;
    }
    // clamp the document frequency to the configured minimum
    if (df < minDf) {
      df = minDf;
    }
    // e.get() is the term frequency, df the document frequency
    vector.setQuick(e.index(), tfidf.calculate((int) e.get(), (int) df, (int) featureCount, (int) vectorCount));
  }
  if (sequentialAccess) {
    vector = new SequentialAccessSparseVector(vector);
  }
  if (namedVector) {
    vector = new NamedVector(vector, key.toString());
  }
  VectorWritable vectorWritable = new VectorWritable(vector);
  context.write(key, vectorWritable);
}
The key is the document id; the value is the document's tf vector, indexed by term id (the output of the previous tf step).
The crucial statement is:
vector.setQuick(e.index(), tfidf.calculate((int) e.get(), (int) df, (int) featureCount, (int) vectorCount));
If you understand the tf-idf algorithm, the statement above is easy to follow -- roughly tf * log(n/df) -- except that here the weight is computed with Lucene's DefaultSimilarity (a quick numeric check follows the class below):
public class TFIDF implements Weight {

  private Similarity sim = new DefaultSimilarity();

  public TFIDF() { }

  public TFIDF(Similarity sim) {
    this.sim = sim;
  }

  @Override
  public double calculate(int tf, int df, int length, int numDocs) {
    // ignore length
    return sim.tf(tf) * sim.idf(df, numDocs);
  }
}
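With Lucene's DefaultSimilarity, tf(freq) is sqrt(freq) and idf(docFreq, numDocs) is log(numDocs / (docFreq + 1)) + 1, so the stored weight is a damped variant of the textbook tf * log(n/df). A minimal standalone sanity check; the DefaultSimilarity package path depends on the Lucene version bundled with your Mahout, and the numbers are made up for illustration:

import org.apache.lucene.search.DefaultSimilarity; // package may differ by Lucene version

public class TfIdfWeightCheck {
  public static void main(String[] args) {
    DefaultSimilarity sim = new DefaultSimilarity();
    int tf = 4;          // term occurs 4 times in the document
    int df = 10;         // term occurs in 10 documents
    int numDocs = 1000;  // corpus size
    double tfWeight = sim.tf(tf);             // sqrt(4) = 2.0
    double idfWeight = sim.idf(df, numDocs);  // log(1000 / 11) + 1 ≈ 5.51
    // ≈ 11.02 -- the value the reducer stores via vector.setQuick(...)
    System.out.println(tfWeight * idfWeight);
  }
}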
Finally, PartialVectorMerger.mergePartialVectors merges the partial vectors from each pass into the final tf-idf vector set.
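The merged output is a SequenceFile of document-id keys and VectorWritable values, the same layout as the tf-vectors input. A minimal sketch for spot-checking the result, assuming Text keys and a tfidf-vectors output directory on the default filesystem (the part file name is illustrative):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.mahout.math.VectorWritable;

// Dump the final tf-idf vectors: key = document id, value = sparse tf-idf vector.
public class DumpTfIdfVectors {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path path = new Path("tfidf-vectors/part-r-00000"); // illustrative path
    SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
    try {
      Text key = new Text();
      VectorWritable value = new VectorWritable();
      while (reader.next(key, value)) {
        System.out.println(key + " => " + value.get());
      }
    } finally {
      reader.close();
    }
  }
}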