When querying a Lucene index, you may notice that some words are silently dropped by the analyzer. These discarded terms are called stop words; in English they are typically function words (articles, prepositions, auxiliaries) that carry little standalone meaning.
An analyzer that supports stop words is normally given an explicit stop-word list when it is constructed; if none is supplied, a default list is used. In StandardAnalyzer, StopAnalyzer, and ClassicAnalyzer the default stop words are:
"a", "an", "and", "are", "as", "at", "be", "but", "by",
"for", "if", "in", "into", "is", "it",
"no", "not", "of", "on", "or", "such",
"that", "the", "their", "then", "there", "these",
"they", "this", "to", "was", "will", "with"
If you want a custom stop-word list, pass it to Lucene's StopAnalyzer:
// Custom stop-word list: the defaults plus "very".
public static final String[] SELF_STOP_WORDS = {
        "a", "an", "and", "are", "as", "at", "be", "but", "by",
        "for", "if", "in", "into", "is", "it",
        "no", "not", "of", "on", "or", "such",
        "that", "the", "their", "then", "there", "these",
        "they", "this", "to", "was", "will", "with",
        "very"
};

// Note: only very old Lucene versions accept a String[] directly; in
// recent versions StopAnalyzer takes a CharArraySet, which
// StopFilter.makeStopSet builds from the array.
// Analyzer analyzer = new StopAnalyzer();  // default stop words
Analyzer analyzer = new StopAnalyzer(StopFilter.makeStopSet(SELF_STOP_WORDS));