This article continues the analysis of the Analysis package.
Algorithms: mechanical segmentation based on 1-grams and 2-grams, plus HMM if the ICTCLAS interface is used (a small sketch of bigram segmentation follows below).
Data structures: parts of the source code use Set, Hashtable, and HashMap.
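To make "mechanical segmentation" concrete, here is a minimal sketch of 2-gram (bigram) segmentation in plain Java. The class BigramSegmenter is made up for illustration and has nothing to do with ICTCLAS; it blindly emits every pair of adjacent characters as a candidate word. 1-gram segmentation would emit each character on its own, which is in effect what StandardAnalyzer does to Chinese text, as we will see below.

import java.util.ArrayList;
import java.util.List;

// Minimal sketch: mechanical bigram segmentation takes every pair of
// adjacent characters as a candidate word; no dictionary is involved.
public class BigramSegmenter
{
  public static List<String> segment(String text)
  {
    List<String> grams = new ArrayList<String>();
    for (int i = 0; i + 1 < text.length(); i++)
    {
      grams.add(text.substring(i, i + 2));
    }
    return grams;
  }

  public static void main(String[] args)
  {
    // prints [我爱, 爱天, 天大]
    System.out.println(segment("我爱天大"));
  }
}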
Understanding Token
The Analysis package in Lucene is dedicated to tokenizing the text that gets indexed, and Token is one of its most important concepts.
Let's look at its source code:
public final class Token {
    String termText;                    // the text of the term
    int startOffset;                    // start offset in the source text
    int endOffset;                      // end offset in the source text
    String type = "word";               // lexical type
    private int positionIncrement = 1;  // position relative to the previous token

    public Token(String text, int start, int end)               // constructor bodies omitted
    public Token(String text, int start, int end, String typ)

    public void setPositionIncrement(int positionIncrement)     // setter bodies omitted
    public int getPositionIncrement() { return positionIncrement; }

    public final String termText() { return termText; }
    public final int startOffset() { return startOffset; }
    public void setStartOffset(int givenStartOffset)
    public final int endOffset() { return endOffset; }
    public void setEndOffset(int givenEndOffset)
    public final String type() { return type; }

    public String toString()                                    // prints all five fields, as shown below
}
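One field worth pausing on is positionIncrement. It records how many positions a token stands after the previous one: the default 1 means "the next word"; a value greater than 1 leaves a gap (for example where a stop word was removed); and 0 stacks the token onto the same position as the previous one, which is the usual way to inject synonyms. Here is a minimal sketch against this old Token/TokenFilter API; SynonymFilter and the 天大 → 天津大学 mapping are my own illustration, not Lucene code.

import org.apache.lucene.analysis.Token;
import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import java.io.IOException;

// Hypothetical filter: whenever the term "天大" comes through, also emit
// "天津大学" at the same position by giving it a positionIncrement of 0.
public class SynonymFilter extends TokenFilter
{
  private Token pending; // injected synonym waiting to be returned

  public SynonymFilter(TokenStream input)
  {
    super(input);
  }

  public Token next() throws IOException
  {
    if (pending != null) // first flush a queued synonym, if any
    {
      Token synonym = pending;
      pending = null;
      return synonym;
    }
    Token token = input.next();
    if (token != null && token.termText().equals("天大"))
    {
      // same offsets as the original token, but increment 0,
      // so both terms occupy a single position in the index
      pending = new Token("天津大学", token.startOffset(), token.endOffset());
      pending.setPositionIncrement(0);
    }
    return token;
  }
}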
Now let's write a short program to watch tokens come out of an analyzer.
TestToken.java
package org.apache.lucene.analysis.test;

import org.apache.lucene.analysis.*;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import java.io.*;

public class TestToken
{
  public static void main(String[] args)
  {
    String string = "我爱天大,但我更爱中国";
    Analyzer analyzer = new StandardAnalyzer();
    //Analyzer analyzer = new StopAnalyzer();
    //Analyzer analyzer = new TjuChineseAnalyzer();
    TokenStream ts = analyzer.tokenStream("dummy", new StringReader(string));
    Token token;
    try
    {
      int n = 0;
      // pull tokens off the stream one at a time until it is exhausted
      while ((token = ts.next()) != null)
      {
        System.out.println((n++) + "->" + token.toString());
      }
    }
    catch (IOException ioe)
    {
      ioe.printStackTrace();
    }
  }
}
Running it with StandardAnalyzer produces the following output:
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(我,0,1,<CJK>,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,1,2,<CJK>,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天,2,3,<CJK>,1)
3->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(大,3,4,<CJK>,1)
4->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(但,5,6,<CJK>,1)
5->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(我,6,7,<CJK>,1)
6->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(更,7,8,<CJK>,1)
7->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,8,9,<CJK>,1)
8->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中,9,10,<CJK>,1)
9->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(国,10,11,<CJK>,1)
Note that the ',' was filtered out by StandardAnalyzer; that is why the token at index 4 jumps straight to startOffset 5.
If we switch to StopAnalyzer() (its tokenizer treats each unbroken run of letters, CJK characters included, as one token):
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(我爱天大,0,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(但我更爱中国,5,11,word,1)
Switching to TjuChineseAnalyzer (written by me; how to write it is covered later in this article):
0->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,3,4,word,1)
1->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(天大,6,8,word,1)
2->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(更,19,20,word,1)
3->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(爱,22,23,word,1)
4->Token's (termText,startOffset,endOffset,type,positionIncrement) is:(中国,25,27,word,1)