lucene3.0_IndexSearcher pagination

Series index:

lucene3.0_Basic usage and caveats (summary)

 

The vast majority of projects need to fetch search results page by page. Lucene provides a ready-made method for this, and it is easy to use.

The main method (API) involved:

TopDocs topDocs(int start, int howMany)
          Returns the documents in the range [start .. start+howMany) that were collected by this collector. Note that if start >= pq.size(), an empty TopDocs is returned, and if pq.size() - start < howMany, then only the available documents in [start .. pq.size()) are returned.
          This method is useful to call in case pagination of search results is allowed by the search application, as well as it attempts to optimize the memory used by allocating only as much as requested by howMany.

For the given start and howMany, two cases are worth noting:

1. If start is greater than or equal to the size of the current result set, an empty TopDocs is returned;

2. If start + howMany is greater than the size of the current result set, only the documents from start to the end are returned, i.e. (result set size - start) documents.
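
As a minimal, self-contained sketch of these two cases (the 12-document in-memory index, the class name and the field name are made up purely for illustration, not part of the original example):

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.MatchAllDocsQuery;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;

public class TopDocsRangeDemo {
    public static void main(String[] args) throws Exception {
        // Build a tiny in-memory index with 12 documents (hypothetical data for illustration).
        RAMDirectory dir = new RAMDirectory();
        IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_30),
                IndexWriter.MaxFieldLength.UNLIMITED);
        for (int i = 0; i < 12; i++) {
            Document doc = new Document();
            doc.add(new Field("f", "doc " + i, Field.Store.YES, Field.Index.ANALYZED));
            writer.addDocument(doc);
        }
        writer.close();

        IndexSearcher searcher = new IndexSearcher(dir);

        // Case 1: start >= number of collected hits -> empty TopDocs.
        TopScoreDocCollector c1 = TopScoreDocCollector.create(20, false);
        searcher.search(new MatchAllDocsQuery(), c1);            // collects all 12 hits
        System.out.println(c1.topDocs(15, 5).scoreDocs.length);  // prints 0

        // Case 2: fewer than howMany documents remain after start -> only the remainder.
        TopScoreDocCollector c2 = TopScoreDocCollector.create(20, false);
        searcher.search(new MatchAllDocsQuery(), c2);
        System.out.println(c2.topDocs(10, 5).scoreDocs.length);  // prints 2

        searcher.close();
    }
}

With 12 collected hits, requesting [15 .. 20) yields an empty TopDocs, while requesting [10 .. 15) yields only the 2 remaining documents.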

 

A simple example follows:

package com.fox.search;

import java.io.File;
import java.io.IOException;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.queryParser.ParseException;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.store.SimpleFSDirectory;
import org.apache.lucene.util.Version;

public class Searcher {

    String path = "d:/realtime";
    FSDirectory dir = null;
    IndexReader reader = null;
    IndexSearcher searcher = null;

    public Searcher() {
        try {
            dir = SimpleFSDirectory.open(new File(path));
            reader = IndexReader.open(dir);
            searcher = new IndexSearcher(reader);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    /**
     * Fetches the documents in the given range (this method only prints the
     * documents, which stands in for actually consuming them).
     *
     * @param start   note: counting starts from 0
     * @param howMany number of documents per page
     */
    public void getResults(int start, int howMany) {
        try {
            QueryParser parser = new QueryParser(Version.LUCENE_30, "f",
                    new StandardAnalyzer(Version.LUCENE_30));
            Query query = parser.parse("a:fox");
            // Collect the top (start + howMany) hits, then slice out [start .. start + howMany).
            TopScoreDocCollector results = TopScoreDocCollector.create(start + howMany, false);
            searcher.search(query, results);
            TopDocs tds = results.topDocs(start, howMany);
            ScoreDoc[] sd = tds.scoreDocs;
            for (int i = 0; i < sd.length; i++) {
                System.out.println(reader.document(sd[i].doc));
            }
        } catch (IOException e) {
            e.printStackTrace();
        } catch (ParseException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] fox) {
        Searcher s = new Searcher();
        System.out.println("Page 1: --------------------");
        s.getResults(0, 5);
        System.out.println("Page 2: --------------------");
        s.getResults(5, 5);
        System.out.println("Page 3: --------------------");
        s.getResults(10, 5);
    }

}
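
In a real application the caller usually thinks in terms of a page number rather than an offset. A tiny wrapper like the hypothetical getPage below (the name and the 1-based page convention are my own assumptions, not part of the original example) could be added to the Searcher class to do the mapping:

    /** Hypothetical convenience method: maps a 1-based page number to (start, howMany). */
    public void getPage(int pageNo, int pageSize) {
        getResults((pageNo - 1) * pageSize, pageSize);
    }

With this, s.getPage(1, 5) is equivalent to s.getResults(0, 5) in main above.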

 

A couple of questions I am leaving for myself here:

1. Lucene's results come back sorted (by relevance by default), and returning the first 5 results or the last 5 results both have to go through the sorting stage, so is the efficiency the same in both cases?

2. Experiments show that when searching over a large amount of data with

static TopScoreDocCollector create(int numHits, boolean docsScoredInOrder)
          Creates a new TopScoreDocCollector given the number of hits to collect and whether documents are scored in order by the input Scorer to setScorer(Scorer).

the larger numHits is, the lower the efficiency. Why is that?
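
The effect described in question 2 can be reproduced with a rough timing sketch such as the one below. It runs the same query with a few different numHits values and prints the elapsed wall-clock time; the index path and the field/query are assumptions carried over from the example above, and the measurement is deliberately naive (no JVM warm-up or averaging), so treat the numbers only as a starting point.

import java.io.File;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.SimpleFSDirectory;
import org.apache.lucene.util.Version;

public class NumHitsTiming {
    public static void main(String[] args) throws Exception {
        // Assumed index location and query, matching the Searcher example above.
        IndexSearcher searcher = new IndexSearcher(SimpleFSDirectory.open(new File("d:/realtime")));
        Query query = new QueryParser(Version.LUCENE_30, "f",
                new StandardAnalyzer(Version.LUCENE_30)).parse("a:fox");

        for (int numHits : new int[] { 10, 1000, 100000 }) {
            long t0 = System.currentTimeMillis();
            TopScoreDocCollector collector = TopScoreDocCollector.create(numHits, false);
            searcher.search(query, collector);
            System.out.println("numHits=" + numHits + " took "
                    + (System.currentTimeMillis() - t0) + " ms");
        }
        searcher.close();
    }
}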

 

 

 

 
