Using Lucene

 

If you want to quickly search files on your disk, or search email, Web pages, or even data stored in a database, Lucene can help you do it. But before anything can be searched, an index must be built. Let's start with the Lucene API:
1. The Lucene API (core classes)
IndexWriter — creates and maintains the index (adds new Documents to an existing index, sets the merge policy, optimizes, etc.)
FSDirectory — the most common class for storing index files; it keeps the index on the file system
Document — the atomic unit of indexing and search; a Document contains a list of Fields
IndexReader — an abstract class providing the interface for reading an index; concrete access goes through its subclasses
Analyzer — the analysis class; its many subclasses all parse text into a TokenStream
Searcher — the core class for querying an index
2. Creating an index
Java code:
Directory dir = FSDirectory.open(new File("lucene.blog"));
IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_29), true, IndexWriter.MaxFieldLength.UNLIMITED);
Document doc = new Document();
// id is indexed as a single token so it can later be matched by Term("id", "101")
doc.add(new Field("id", "101", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.add(new Field("name", "kobe bryant", Field.Store.YES, Field.Index.ANALYZED));
writer.addDocument(doc);
writer.optimize();
writer.close();
As shown above, the index files are stored in the lucene.blog folder under the working directory. We create a Document, add two Fields (id and name) to it, add it to the index files with IndexWriter.addDocument(Document), optimize the index with IndexWriter.optimize(), and finally close the IndexWriter.
3. Deleting a Document from the index with IndexWriter
Java code:
Directory dir = FSDirectory.open(new File("lucene.blog"));
// create = false: open the existing index rather than re-creating (and wiping) it
IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_29), false, IndexWriter.MaxFieldLength.UNLIMITED);
writer.deleteDocuments(new Term("id", "101"));
writer.commit();
writer.close();
As above, we first open the index location (the lucene.blog folder under the working directory), then call IndexWriter.deleteDocuments(Term) to delete the Document created in section 2. Note that commit() must be called here; section 2 could do without it because optimize() performs a commit internally.
4. Updating a Document in the index with IndexWriter
Java code:
Directory dir = FSDirectory.open(new File("lucene.blog"));
// create = false: append to the existing index instead of re-creating it
IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_29), false, IndexWriter.MaxFieldLength.UNLIMITED);
Document doc = new Document();
doc.add(new Field("id", "101", Field.Store.YES, Field.Index.NOT_ANALYZED));
doc.add(new Field("name", "kylin soong", Field.Store.YES, Field.Index.ANALYZED));
writer.updateDocument(new Term("id", "101"), doc);
writer.commit();
writer.close();
The update is done with IndexWriter.updateDocument(Term, Document): every Document containing Term("id", "101") is deleted, and the Document passed in is then added to the index files.
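Conceptually, updateDocument(Term, Document) is shorthand for the delete-then-add sequence below (a sketch of the semantics only, not of IndexWriter's internals; it reuses the writer and doc variables from the example above):

```java
// Semantically equivalent to writer.updateDocument(new Term("id", "101"), doc):
// remove every document matching the term, then add the replacement document.
writer.deleteDocuments(new Term("id", "101"));
writer.addDocument(doc);
writer.commit();
```

Unlike this two-step version, updateDocument performs the delete and the add as a single atomic operation with respect to concurrent readers.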
5. The meaning of the Field options
Java code:
Field field = new Field(
		"name",
		"kobe bryant",
		Field.Store.YES,
		Field.Index.ANALYZED,
		Field.TermVector.YES);
The code above shows a Field with all of its options set; the meaning of each option is explained below.
Field.Store.* controls whether the Field's full value is stored in the index. Note: avoid storing very large text values, since that bloats the index files.
Field.Store.YES — store the value; a stored value can be read back in full from a matching document (e.g. doc.get("id") returning 101)
Field.Store.NO — do not store the value
Field.Index.* controls whether, and how, the Field's value is indexed, i.e. whether it can be searched
Field.Index.ANALYZED — run the value through the Analyzer, splitting it into multiple tokens
Field.Index.NOT_ANALYZED — do not analyze the value; index it as a single token
Field.Index.ANALYZED_NO_NORMS — like ANALYZED, but do not store norms (field-length normalization and index-time boost data) in the index
Field.Index.NOT_ANALYZED_NO_NORMS — like NOT_ANALYZED, but without norms
Field.Index.NO — do not index; the value cannot be searched
Field.TermVector.* is needed when you want to retrieve a document's unique terms at search time, or highlight matches in the results
Field.TermVector.YES — record the unique terms and their frequencies, but no positions or offsets
Field.TermVector.WITH_POSITIONS — as YES, plus term positions
Field.TermVector.WITH_OFFSETS — as YES, plus character offsets
Field.TermVector.WITH_POSITIONS_OFFSETS — as YES, plus both positions and offsets
Field.TermVector.NO — record no term vector information
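To make these options concrete, here are a few typical combinations (the field names and values are illustrative, not from the examples above):

```java
// An exact-match key: stored for retrieval, indexed as a single token, no term vectors.
Field id = new Field("id", "101",
        Field.Store.YES, Field.Index.NOT_ANALYZED);

// Full-text content: analyzed for search, stored for display, with positions and
// offsets recorded so a highlighter can mark the matched terms.
Field body = new Field("body", "kobe bryant scores 81 points",
        Field.Store.YES, Field.Index.ANALYZED,
        Field.TermVector.WITH_POSITIONS_OFFSETS);

// Display-only metadata: stored but neither indexed nor searchable.
Field thumbnail = new Field("thumbnail", "/images/101.png",
        Field.Store.YES, Field.Index.NO);
```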
6. Indexing numbers
Java code:
Document doc = new Document();
NumericField field1 = new NumericField("id");
field1.setIntValue(101);
doc.add(field1);
NumericField field2 = new NumericField("price");
field2.setDoubleValue(123.50);
doc.add(field2);
The code above shows how to index numbers; NumericField indexes values in a form that supports efficient numeric range queries.
7. Indexing dates and times
Java code:
Document doc = new Document();
doc.add(new NumericField("timestamp").setLongValue(new Date().getTime()));
// getTime() is in milliseconds, so divide by 1000 first to get whole days
doc.add(new NumericField("day").setIntValue((int) (new Date().getTime()/1000/3600/24)));
Calendar cal = Calendar.getInstance();
cal.setTime(new Date());
doc.add(new NumericField("dayOfMonth").setIntValue(cal.get(Calendar.DAY_OF_MONTH)));
In essence, dates and times are handled by converting them to numbers. Note: dates and times, like the numbers above, could of course also be indexed as plain strings, but that hurts querying, since string order does not match numeric order.
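Indexing timestamps with NumericField pays off at query time: NumericRangeQuery can match a numeric range efficiently, which a string-indexed date cannot. A sketch, assuming the timestamp field from the example above and an open IndexSearcher named searcher:

```java
// Match documents whose timestamp falls within the last 24 hours.
long now = new Date().getTime();
Query q = NumericRangeQuery.newLongRange(
        "timestamp",                // field indexed via NumericField above
        now - 24L * 3600 * 1000,    // lower bound: 24 hours ago
        now,                        // upper bound: now
        true, true);                // both bounds inclusive
TopDocs hits = searcher.search(q, 10);
```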
8. Other IndexWriter methods
Java code:
Directory dir = FSDirectory.open(new File("lucene.blog"));
IndexWriter writer = new IndexWriter(dir, new StandardAnalyzer(Version.LUCENE_29), true, IndexWriter.MaxFieldLength.LIMITED);
writer.setMaxFieldLength(1);
MergePolicy policy = new LogByteSizeMergePolicy(writer);
writer.setMergePolicy(policy);
writer.optimize(5);
writer.close();
As above, IndexWriter.MaxFieldLength.LIMITED enables Field truncation: if a Field value is very long and you only want to index its first few terms, truncation does exactly that (setMaxFieldLength(1) above keeps just one term per Field). IndexWriter.setMergePolicy(policy) sets the segment merge policy, and optimize(int maxNumSegments) optimizes the index down to at most the given number of segments.
9. Querying by an exact term
Java code:
IndexReader reader = IndexReader.open(FSDirectory.open(new File("lucene.blog")), true);
IndexSearcher searcher = new IndexSearcher(reader);
Term term = new Term("id", "101");
Query query = new TermQuery(term);
TopDocs topDocs = searcher.search(query, 10);
System.out.println(topDocs.totalHits);
ScoreDoc[] docs = topDocs.scoreDocs;
System.out.println(docs[0].doc + " " + docs[0].score);
Document doc = searcher.doc(docs[0].doc);
System.out.println(doc.get("id"));
The example above shows the basic shape of a Lucene query. IndexSearcher is the core search class, and IndexReader reads the index files. IndexSearcher has a series of overloaded search() methods that perform different kinds of queries depending on the parameters passed in, and the ScoreDoc array holds the query results together with their relevance scores.
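One of those overloads accepts a Filter and a Sort, ordering the hits by a field instead of by relevance score. A sketch, reusing the searcher and query from above (sorting by a field requires that field to be indexed as a single un-analyzed token):

```java
// Sort results by the id field rather than by score;
// the null Filter means no extra filtering is applied.
Sort sort = new Sort(new SortField("id", SortField.STRING));
TopFieldDocs sorted = searcher.search(query, null, 10, sort);
```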
10. Querying with QueryParser and collecting the results
Java code:
IndexReader reader = IndexReader.open(FSDirectory.open(new File("lucene.blog")), true);
IndexSearcher searcher = new IndexSearcher(reader);
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_29);
QueryParser parser = new QueryParser(Version.LUCENE_29, "name", analyzer);
String queryString = "kobe";
Query query = parser.parse(queryString);
TopScoreDocCollector collector = TopScoreDocCollector.create(10, false);
searcher.search(query, collector);
ScoreDoc[] hits = collector.topDocs().scoreDocs;
for (int i = 0; i < hits.length; i++) {
	Document doc = searcher.doc(hits[i].doc);
	String name = doc.get("name");
	if (name != null) {
		System.out.println(name);
	}
}
The above is an example of using QueryParser to search for the keyword "kobe" in the name field, collecting the query results with a TopScoreDocCollector.
11. Operating on an index with Luke, Lucene's graphical tool
Luke is very easy to use:
Download the latest version from http://code.google.com/p/luke/, then double-click the downloaded jar to open the graphical interface. Point it at an index directory and you can inspect and manipulate the index graphically.