Last time we stepped into PhraseQuery's createWeight method and saw that when a PhraseQuery contains only a single term, it delegates to TermQuery's createWeight. Let's dig into that:
protected Weight createWeight(Searcher searcher) throws IOException {
return new TermWeight(searcher);
}
It returns a TermWeight object. Weight is an interface, so how does TermWeight implement the interface's methods?
public TermWeight(Searcher searcher)
throws IOException {
this.similarity = getSimilarity(searcher);
idf = similarity.idf(term, searcher); // compute idf
}
First, let's look at the getSimilarity(searcher) method:
public Similarity getSimilarity(Searcher searcher) {
return searcher.getSimilarity();
}
which in turn calls the Searcher's own method:
public Similarity getSimilarity() {
return this.similarity;
}
That returns the Searcher's similarity field:
private Similarity similarity = Similarity.getDefault();
which is initialized via Similarity.getDefault():
public static Similarity getDefault() {
return Similarity.defaultImpl;
}
private static Similarity defaultImpl = new DefaultSimilarity();
So the default implementation is actually a subclass, DefaultSimilarity, created with its no-arg constructor. That settles, for now, how the similarity is obtained.
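The chain we just traced is a static-default pattern: an abstract type holding a class-wide default instance of one of its own subclasses. A minimal self-contained sketch of the idea (class names Sim, DefaultSim, and Demo are my own, not Lucene's):

```java
// Sketch of the static-default pattern seen in Similarity:
// an abstract type exposing a class-wide default implementation.
abstract class Sim {
    private static Sim defaultImpl = new DefaultSim();

    public static Sim getDefault() {
        return defaultImpl;
    }

    public abstract float idf(int docFreq, int numDocs);
}

class DefaultSim extends Sim {
    @Override
    public float idf(int docFreq, int numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }
}

public class Demo {
    public static void main(String[] args) {
        // Every caller asking for the default gets the same subclass instance.
        Sim s = Sim.getDefault();
        System.out.println(s.getClass().getSimpleName()); // prints DefaultSim
    }
}
```

The benefit is that code like TermWeight never names a concrete similarity; it only asks the Searcher, which falls back to this shared default.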
Next, what does similarity.idf(term, searcher) actually do?
public float idf(Term term, Searcher searcher) throws IOException {
return idf(searcher.docFreq(term), searcher.maxDoc());
}
public abstract float idf(int docFreq, int numDocs);
This one is abstract, so off to the subclass DefaultSimilarity:
public float idf(int docFreq, int numDocs) {
return (float)(Math.log(numDocs/(double)(docFreq+1)) + 1.0);
}
Aha, so in the end it's just a formula. Here numDocs is the total number of documents in the index, and docFreq is the number of documents that contain the current term; the computed value is the inverse document frequency (idf).
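To see how the formula behaves, here is a standalone check using the same expression as DefaultSimilarity.idf (the document counts below are made-up illustration values, not from any real index):

```java
public class IdfDemo {
    // Same formula as DefaultSimilarity.idf:
    // log(numDocs / (docFreq + 1)) + 1
    static float idf(int docFreq, int numDocs) {
        return (float) (Math.log(numDocs / (double) (docFreq + 1)) + 1.0);
    }

    public static void main(String[] args) {
        // A rare term scores much higher than a common one:
        System.out.println(idf(5, 1000));   // rare term, about 6.12
        System.out.println(idf(900, 1000)); // common term, about 1.10
        // The +1 in the denominator keeps docFreq == 0 from dividing by zero:
        System.out.println(idf(0, 1000));
    }
}
```

This is exactly why idf boosts distinctive query terms: the fewer documents a term appears in, the larger its weight in scoring.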
So it seems searcher.docFreq must already have produced the count? Let's go see:
public int docFreq(Term term) throws IOException {
return reader.docFreq(term);
}
In IndexReader this turns out to be an abstract method. But IndexReader's open method reveals which concrete reader does the work:
private static IndexReader open(final Directory directory, final boolean closeDirectory, final IndexDeletionPolicy deletionPolicy) throws CorruptIndexException, IOException {
return DirectoryIndexReader.open(directory, closeDirectory, deletionPolicy);
}
And there it is: the heart of the search actually lives in DirectoryIndexReader. Only after reading the source do you realize how shallowly the books cover this.
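The mechanism that makes this work is ordinary Java polymorphism: a factory returns an abstract type, and the abstract docFreq call dispatches to the concrete reader at runtime. A toy model of the shape (class names Reader, DirReader, and the "lucene" term count are mine, purely for illustration):

```java
import java.util.HashMap;
import java.util.Map;

// Toy model: an abstract docFreq on the base reader is fulfilled
// by the concrete reader that the static open() factory returns.
abstract class Reader {
    public abstract int docFreq(String term);

    public static Reader open() {
        // The factory hides the concrete type, the way IndexReader.open
        // hands back a DirectoryIndexReader.
        return new DirReader();
    }
}

class DirReader extends Reader {
    private final Map<String, Integer> freqs = new HashMap<>();

    DirReader() {
        freqs.put("lucene", 42); // pretend on-disk term statistics
    }

    @Override
    public int docFreq(String term) {
        return freqs.getOrDefault(term, 0);
    }
}

public class DispatchDemo {
    public static void main(String[] args) {
        Reader r = Reader.open();                // static type: Reader
        System.out.println(r.docFreq("lucene")); // dispatches to DirReader
    }
}
```

So when TermWeight's idf computation calls searcher.docFreq(term), every hop in the chain is written against abstractions, and only this last hop touches the actual index files.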