Step 1, documents: prepare the data source; segment each document into words, remove stop words, separate the remaining words with single spaces, and keep one document per line, ready for use;
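For illustration only (the words below are made up, not from the original corpus), one prepared line of the data file might look like this after segmentation and stop-word removal:

机器 学习 算法 模型 训练 关键词 提取 算法

Every later step assumes exactly this layout: plain words, single spaces, one document per line.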
Step 2, initialization: load the IDF configuration generated in Step 4 of 《机学走起第二式》 into a container;
// requires java.io.BufferedReader, java.io.FileReader, java.util.HashMap, java.util.Map
Map<String, Double> idf = new HashMap<>();
try (BufferedReader br = new BufferedReader(new FileReader(XX_FIDF))) {
    String buf;
    while (null != (buf = br.readLine())) {
        String[] arr = buf.split("\t");   // word and IDF value separated by a tab
        idf.put(arr[XX_IKEY], Double.parseDouble(arr[XX_IVAL]));
    }
} catch (Exception e) {
    e.printStackTrace();
}
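A note on the assumed input: XX_FIDF is expected to point at the tab-separated file produced in the previous post, with XX_IKEY and XX_IVAL indexing the word column and the IDF-value column of each line. A hypothetical line (made-up value) would therefore look like:

算法	2.3513

If the file's column order differs, adjust XX_IKEY and XX_IVAL accordingly.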
Step 3, counting: count the total number of times each word occurs in the current document;
// buf: one pre-segmented document from Step 1 (words separated by single spaces); how it is read is omitted here
Map<String, Double> doc = new HashMap<>();
String[] wds = buf.split(" ");
for (String w : wds) {
    Double val = doc.get(w);                  // previous count, or null if the word is new
    doc.put(w, null != val ? val + 1 : 1.0);  // accumulate the occurrence count
}
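A quick worked example: if the current line is "算法 模型 算法 训练", then after the loop doc holds {算法=2.0, 模型=1.0, 训练=1.0} and wds.length is 4.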
Step 4, computation: divide the word's occurrence count by the total number of words in the current document, then multiply by the word's IDF value;
TF-IDF = term frequency (TF) x inverse document frequency (IDF), where TF = occurrences of the word in the document / total number of words in the document
// doc and wds come from Step 3, idf from Step 2
List<Map.Entry<String, Double>> rst = new ArrayList<>();
for (Map.Entry<String, Double> v : doc.entrySet()) {
    Double val = idf.get(v.getKey());
    if (null != val) {
        v.setValue(v.getValue() / wds.length * val);  // TF * IDF
    } else {
        v.setValue(0.99999999);  // word not in the IDF table: fall back to a fixed default weight
    }
    rst.add(v);
}
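Sanity-checking the arithmetic: a word that occurs 2 times in a 10-word document with an IDF of 3.0 gets a TF-IDF of 2 / 10 * 3.0 = 0.6, while a word absent from the IDF table simply gets the fixed default 0.99999999.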
Step 5, sorting: sort all of the current document's words by their TF-IDF values in descending order;
// sort in descending order of TF-IDF value (larger values first)
rst.sort((n, m) -> m.getValue().compareTo(n.getValue()));
Step 6, output: extract the top n most valuable words as the document's keywords, as business needs dictate;
// print at most XX_SIZE top-ranked words together with their TF-IDF values
for (int idx = 0; idx < rst.size() && idx < XX_SIZE; ++idx) {
    System.out.print(rst.get(idx).getKey() + ": [" + rst.get(idx).getValue() + "], ");
}
System.out.println();
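With this print format, one document's output would look roughly like the following (keywords and values are made up for illustration):

算法: [0.61], 模型: [0.45], 训练: [0.32],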
Step 7, a like: reading the blog while downloading the source code, yet neither tipping nor liking; is that really okay?
Preview: 《机学走起第四式:起飞》, the LDA topic-extraction algorithm and its implementation. Looking forward to it is strictly forbidden!