Setting Environment Variables
Extract the latest downloaded Lucene release into a working directory and add the following four JARs to the CLASSPATH environment variable:
Lucene core JAR: lucene-core-{version}.jar
Query parser JAR: lucene-queryparser-{version}.jar
Common analysis JAR: lucene-analyzers-common-{version}.jar
Lucene demo JAR: lucene-demo-{version}.jar
Building the Index and Searching
Open a command prompt and run the following command to index every file under the {path-to-lucene}/src directory:
java org.apache.lucene.demo.IndexFiles -docs {path-to-lucene}/src
Then run the following command to search the index:
java org.apache.lucene.demo.SearchFiles
IndexFiles
The main() method of the IndexFiles class parses the command-line arguments, then instantiates an IndexWriter: it opens the specified directory and creates a StandardAnalyzer and an IndexWriterConfig.
All settings in IndexWriterConfig apply to the IndexWriter it configures. Once an IndexWriter has been created from a config, any subsequent changes to that config object have no effect on the writer.
IndexWriter uses a Lucene Directory to store the index; Directory subclasses can persist the index to RAM, a database, or other storage media.
Lucene analyzers run as pipelines: they split text into index tokens and optionally filter, discard, or otherwise drop unwanted tokens along the way. StandardAnalyzer, for example, tokenizes text according to grammar-based rules, lowercases each token, and removes stop words and other terms of low retrieval value. Tokenization rules also differ from language to language, so you should choose an analyzer that matches the language of your text; Lucene ships analyzers for many languages.
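The pipeline described above can be sketched in plain Java. The following is a deliberately simplified, hypothetical illustration (MiniAnalyzer is not a Lucene class): it splits on non-alphanumeric characters, lowercases, and drops a tiny stop-word set. The real StandardAnalyzer does far more, including Unicode-aware tokenization and a configurable stop-word list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Simplified sketch of an analyzer pipeline: tokenize -> lowercase -> stop-word filter.
public class MiniAnalyzer {
    // A tiny illustrative stop-word set; StandardAnalyzer's default set is larger.
    private static final Set<String> STOP_WORDS = Set.of("a", "an", "the", "of", "in", "for");

    public static List<String> analyze(String text) {
        List<String> tokens = new ArrayList<>();
        for (String raw : text.split("[^A-Za-z0-9]+")) { // crude word-boundary tokenizer
            String token = raw.toLowerCase();            // lowercase filter
            if (!token.isEmpty() && !STOP_WORDS.contains(token)) {
                tokens.add(token);                       // stop-word filter
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(analyze("The Art of Computer Science"));
        // prints [art, computer, science]
    }
}
```

Because the same pipeline runs at both index time and query time, a query for "The Lucene" and a document title containing "lucene" reduce to the same token.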
Searching Files
Searching is the joint work of IndexSearcher, StandardAnalyzer, and QueryParser. The QueryParser is constructed with an analyzer and processes the query string the same way documents were analyzed at index time: finding word boundaries, removing low-value words such as "a" and "an", and so on. The parse result is returned as a Query object.
SearchFiles then calls IndexSearcher.search(query, n), which returns the top n matches as a TopDocs object.
Inheritance relationships of the related classes
Simple DEMO
Below is a simple demo: HelloLucene.java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.ParseException;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.ScoreDoc;
import org.apache.lucene.search.TopScoreDocCollector;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import java.io.IOException;
public class HelloLucene {
    public static void main(String[] args) throws IOException, ParseException {
        // 0. Specify the analyzer for tokenizing text.
        //    The same analyzer should be used for indexing and searching.
        StandardAnalyzer analyzer = new StandardAnalyzer(Version.LUCENE_42);

        // 1. Create the index.
        Directory index = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_42, analyzer);
        IndexWriter w = new IndexWriter(index, config);
        addDoc(w, "Lucene in action", "193398817");
        addDoc(w, "Lucene for Dummies", "55320055Z");
        addDoc(w, "Managing Gigabytes", "55063554A");
        addDoc(w, "The Art of Computer Science", "9900333X");
        w.close();

        // 2. Parse the query.
        String querystr = args.length > 0 ? args[0] : "lucene";
        // The "title" arg specifies the default field to use
        // when no field is explicitly specified in the query.
        Query q = new QueryParser(Version.LUCENE_42, "title", analyzer).parse(querystr);

        // 3. Search.
        int hitsPerPage = 10;
        IndexReader reader = DirectoryReader.open(index);
        IndexSearcher searcher = new IndexSearcher(reader);
        TopScoreDocCollector collector = TopScoreDocCollector.create(hitsPerPage, true);
        searcher.search(q, collector);
        ScoreDoc[] hits = collector.topDocs().scoreDocs;

        // 4. Display the results.
        System.out.println("Found " + hits.length + " hits.");
        for (int i = 0; i < hits.length; ++i) {
            int docId = hits[i].doc;
            Document d = searcher.doc(docId);
            System.out.println((i + 1) + ". " + d.get("isbn") + "\t" + d.get("title"));
        }

        // The reader can only be closed when there
        // is no further need to access the documents.
        reader.close();
    }

    private static void addDoc(IndexWriter w, String title, String isbn) throws IOException {
        Document doc = new Document();
        doc.add(new TextField("title", title, Field.Store.YES));
        // Use a StringField for isbn because we don't want it tokenized.
        doc.add(new StringField("isbn", isbn, Field.Store.YES));
        w.addDocument(doc);
    }
}