IK Analyzer is an open-source Chinese word-segmentation (tokenization) framework built on Lucene. Download: http://code.google.com/p/ik-analyzer/downloads/list
The following files need to be added to the project:
IKAnalyzer.cfg.xml
IKAnalyzer2012.jar
lucene-core-3.6.0.jar
stopword.dic
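IKAnalyzer.cfg.xml uses the standard Java XML-properties format to declare optional extension dictionaries and stop-word files, which IK loads from the classpath. A minimal sketch, assuming an extension dictionary named `ext.dic` (a placeholder; only `stopword.dic` appears in the list above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- extension dictionary: one word per line, UTF-8; separate multiple files with ';' -->
    <entry key="ext_dict">ext.dic;</entry>
    <!-- extension stop-word dictionary -->
    <entry key="ext_stopwords">stopword.dic;</entry>
</properties>
```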
Usage in a project:
/**
 * Segments the keyword with IK Analyzer and joins the tokens with spaces.
 */
private String convertKeyword(String keyWord) throws Exception {
    if (keyWord == null) {
        return "";
    }
    StringBuilder retuString = new StringBuilder();
    // Finest-grained segmentation by default; pass true for smart segmentation
    IKAnalyzer analyzer = new IKAnalyzer();
    TokenStream tokenStream = analyzer.tokenStream("content",
            new StringReader(keyWord));
    // CharTermAttribute exposes the token text directly, so there is no need
    // to parse the "term=xxx" form of the deprecated TermAttribute's toString()
    CharTermAttribute termAttr = tokenStream.addAttribute(CharTermAttribute.class);
    tokenStream.reset();
    while (tokenStream.incrementToken()) {
        retuString.append(termAttr.toString()).append(" ");
    }
    tokenStream.end();
    tokenStream.close();
    return retuString.toString();
}
A complete example using IK Analyzer through the Lucene Analyzer API:
package com.haha.test;

import java.io.IOException;
import java.io.StringReader;
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
import org.wltea.analyzer.lucene.IKAnalyzer;

public class Test2 {
    public static void main(String[] args) throws IOException {
        String text = "基于java语言开发的轻量级的中文分词工具包";

        // true enables smart segmentation; the no-arg constructor (or false)
        // uses the finest-grained segmentation
        Analyzer anal = new IKAnalyzer(true);
        StringReader reader = new StringReader(text);

        TokenStream ts = anal.tokenStream("", reader);
        CharTermAttribute term = ts.addAttribute(CharTermAttribute.class);
        ts.reset();

        while (ts.incrementToken()) {
            System.out.print(term.toString() + "|");
        }
        ts.end();
        ts.close();
        reader.close();
        System.out.println();
    }
}
Output:
基于|Java|语言|开发|的|轻量级|的|中文|分词|工具包|
The same segmentation using IK's core API (IKSegmenter) directly, without going through Lucene:
package com.haha.test;

import java.io.IOException;
import java.io.StringReader;

import org.wltea.analyzer.core.IKSegmenter;
import org.wltea.analyzer.core.Lexeme;

public class Test3 {

    public static void main(String[] args) throws IOException {
        String text = "基于java语言开发的轻量级的中文分词工具包";
        StringReader sr = new StringReader(text);
        // second argument: true = smart segmentation, false = finest-grained
        IKSegmenter ik = new IKSegmenter(sr, true);
        Lexeme lex = null;
        while ((lex = ik.next()) != null) {
            System.out.print(lex.getLexemeText() + "|");
        }
        sr.close();
    }
}