1. Analyzer
The Analyzer class is the base class for all analyzers. It is abstract, so every subclass must implement:
```java
@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer source = new LetterTokenizer(Version.LUCENE_42, reader); // build the tokenizer for this field
    return new TokenStreamComponents(source);
}
```
TokenStreamComponents has two constructors:
- Analyzer.TokenStreamComponents(Tokenizer source)
- Analyzer.TokenStreamComponents(Tokenizer source, TokenStream result)
2. TokenStream
TokenStream is the class that produces the sequence of tokens. It has two kinds of subclasses, Tokenizer and TokenFilter:
1. A Tokenizer creates tokens by reading characters from a Reader.
2. A TokenFilter wraps another TokenStream and processes its tokens to produce new tokens.
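This Tokenizer/TokenFilter relationship is the decorator pattern: a filter holds a reference to another stream and rewrites what it emits. As a minimal plain-Java sketch (the `SimpleTokenStream`, `WhitespaceSource`, and `LowerCaseWrapper` names are invented for illustration; this is not Lucene's actual API):

```java
import java.util.Arrays;
import java.util.Iterator;
import java.util.Locale;

// Stand-in for TokenStream: anything that yields tokens one at a time.
interface SimpleTokenStream {
    String next(); // next token, or null when exhausted
}

// Stand-in for a Tokenizer: produces tokens by splitting raw text.
class WhitespaceSource implements SimpleTokenStream {
    private final Iterator<String> it;
    WhitespaceSource(String text) { it = Arrays.asList(text.split("\\s+")).iterator(); }
    public String next() { return it.hasNext() ? it.next() : null; }
}

// Stand-in for a TokenFilter: wraps another stream and rewrites its tokens.
class LowerCaseWrapper implements SimpleTokenStream {
    private final SimpleTokenStream in;
    LowerCaseWrapper(SimpleTokenStream in) { this.in = in; }
    public String next() {
        String t = in.next();
        return t == null ? null : t.toLowerCase(Locale.ROOT);
    }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // The filter wraps the source, exactly as a TokenFilter wraps a Tokenizer.
        SimpleTokenStream chain = new LowerCaseWrapper(new WhitespaceSource("The QUICK Fox"));
        for (String t = chain.next(); t != null; t = chain.next()) {
            System.out.print("[" + t + "]");
        }
        // prints [the][quick][fox]
    }
}
```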
3. When building an analyzer, the order of the analysis chain matters
This follows Lucene in Action; I am not entirely sure this is the right way to understand it.
```java
public class StopAnalyzer2 extends Analyzer {

    private CharArraySet stopWords;

    public StopAnalyzer2() {
        stopWords = StopAnalyzer.ENGLISH_STOP_WORDS_SET;
    }

    public StopAnalyzer2(String[] stopWords) {
        this.stopWords = StopFilter.makeStopSet(Version.LUCENE_42, stopWords);
    }

    @Override
    protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
        // First tokenize into lowercased tokens
        Tokenizer source = new LowerCaseTokenizer(Version.LUCENE_42, reader);
        // Then strip the stop words from the lowercased stream
        return new TokenStreamComponents(source, new StopFilter(Version.LUCENE_42, source, stopWords));
    }

    // Check whether the capitalized "The" gets filtered out
    public static void main(String[] args) throws IOException {
        AnalyzerUtils.displayTokes(new StopAnalyzer2(), "The I love you");
    }
}
```
Result:
After modifying the chain order:
```java
@Override
protected TokenStreamComponents createComponents(String fieldName, Reader reader) {
    Tokenizer source = new LetterTokenizer(Version.LUCENE_42, reader);
    // Stop filtering now happens BEFORE lowercasing, so the capitalized "The" is not matched
    TokenStream filter = new StopFilter(Version.LUCENE_42, source, stopWords);
    return new TokenStreamComponents(source, new LowerCaseFilter(Version.LUCENE_42, filter));
}
```
Result:
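The effect of the chain order can be reproduced without Lucene. A minimal plain-Java sketch (the `lowerThenStop` and `stopThenLower` helpers are invented for illustration, not Lucene APIs): with a lowercase stop set, filtering after lowercasing removes "The", while filtering before lowercasing lets it slip through.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;
import java.util.Set;

public class ChainOrderDemo {
    // Lowercase stop set, in the spirit of ENGLISH_STOP_WORDS_SET
    static final Set<String> STOP = Set.of("the", "i");

    // Correct order: lowercase first, then stop-filter.
    static List<String> lowerThenStop(String text) {
        List<String> out = new ArrayList<>();
        for (String t : text.split("\\s+")) {
            String lower = t.toLowerCase(Locale.ROOT);
            if (!STOP.contains(lower)) out.add(lower);
        }
        return out;
    }

    // Wrong order: the stop filter sees "The", not "the", so nothing matches.
    static List<String> stopThenLower(String text) {
        List<String> out = new ArrayList<>();
        for (String t : text.split("\\s+")) {
            if (!STOP.contains(t)) out.add(t.toLowerCase(Locale.ROOT));
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(lowerThenStop("The I love you")); // [love, you]
        System.out.println(stopThenLower("The I love you")); // [the, i, love, you]
    }
}
```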
4. Comparing different analyzers
The book's utility class; this version was found online:
```java
public class AnalyzerUtils {

    public static void displayTokes(Analyzer analyzer, String text) throws IOException {
        displayTokes(analyzer.tokenStream("contents", new StringReader(text)));
    }

    public static void displayTokes(TokenStream tokenStream) throws IOException {
        CharTermAttribute termAttribute = tokenStream.addAttribute(CharTermAttribute.class);
        PositionIncrementAttribute posIncr = tokenStream.addAttribute(PositionIncrementAttribute.class);
        OffsetAttribute offset = tokenStream.addAttribute(OffsetAttribute.class);
        TypeAttribute type = tokenStream.addAttribute(TypeAttribute.class);
        // reset() must be called before incrementToken(),
        // otherwise it throws java.lang.ArrayIndexOutOfBoundsException
        tokenStream.reset();
        while (tokenStream.incrementToken()) {
            System.out.print("[" + termAttribute.toString() + " "
                    + offset.startOffset() + "->" + offset.endOffset() + " "
                    + type.type() + "]");
        }
        tokenStream.end();
        tokenStream.close();
    }
}
```
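The start/end offsets printed by the utility above are simply character positions of each token in the original text. A plain-Java sketch of what those numbers mean (the `OffsetDemo` class is invented for illustration; real offsets come from Lucene's OffsetAttribute):

```java
import java.util.Locale;

public class OffsetDemo {
    // Print each whitespace-separated token with its start/end character
    // offsets, mimicking the "[term start->end]" format used above.
    public static String display(String text) {
        StringBuilder sb = new StringBuilder();
        int pos = 0;
        for (String token : text.split("\\s+")) {
            int start = text.indexOf(token, pos); // offset of this occurrence
            int end = start + token.length();
            sb.append("[").append(token.toLowerCase(Locale.ROOT))
              .append(" ").append(start).append("->").append(end).append("]");
            pos = end; // continue searching after this token
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(display("The I love you"));
        // [the 0->3][i 4->5][love 6->10][you 11->14]
    }
}
```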