None of the code I found online would remove stopwords. I finally downloaded a source package and noticed that stopword.dic and IKAnalyzer.cfg.xml were placed in both the src and the bin directories, while I had only put them under src. That is why the analyzer could segment text but not filter stopwords; once I copied the files into bin as well (so they end up on the classpath), stopword removal worked. As a bonus, stopword.dic can be freely extended with your own entries, which is very convenient.
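For reference, the dictionaries are wired up through IKAnalyzer.cfg.xml. Below is a minimal sketch of that config; the entry keys follow IK Analyzer's standard configuration format, and ext.dic here is a hypothetical user dictionary you could add alongside stopword.dic:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- optional user dictionaries; multiple files separated by semicolons -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- stopword dictionaries: one stopword per line, UTF-8 encoded -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>
```

Both this file and the .dic files it names must be on the runtime classpath (i.e. copied into bin, not just src) for the stopwords to take effect.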
Code:
import java.io.IOException;
import java.io.StringReader;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class OwnIKAnalyzer {
    public static void main(String[] args) throws IOException {
        // Text to segment; stopwords listed in stopword.dic are dropped
        // as long as the dictionary files are on the classpath (bin).
        String text = "总的来说基于java语言开发的轻量级的中文分词工具包";
        StringReader sr = new StringReader(text);
        // The second argument enables smart (coarse-grained) segmentation.
        IKSegmenter ik = new IKSegmenter(sr, true);
        Lexeme lex = null;
        while ((lex = ik.next()) != null) {
            System.out.print(lex.getLexemeText() + ",");
        }
    }
}