1. Use the Stanford Word Segmenter for Chinese word segmentation; download it from http://nlp.stanford.edu/software/segmenter.shtml
2. Version: 1.6.7
3. Put seg.jar on the classpath and place the data directory under the src directory (for a command-line run, see the sketch after step 5)
4. Write a test program based on the demo shipped with the segmenter:
import java.util.List;
import java.util.Properties;

import edu.stanford.nlp.ie.crf.CRFClassifier;
import edu.stanford.nlp.ling.CoreLabel;

public class SegDemo {

    /** Segments a sentence and joins the resulting tokens with spaces. */
    public static String doSegment(String data, CRFClassifier<CoreLabel> c) {
        // segmentString returns a List<String>; iterate it directly instead of
        // casting toArray() to String[], which throws a ClassCastException.
        List<String> segs = c.segmentString(data);
        StringBuilder buf = new StringBuilder();
        for (String s : segs) {
            buf.append(s).append(" ");
        }
        return buf.toString();
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("sighanCorporaDict", "data");
        props.setProperty("serDictionary", "data/dict-chris6.ser.gz");
        props.setProperty("inputEncoding", "UTF-8");
        props.setProperty("sighanPostProcessing", "true");

        // Load the CRF model trained on the Penn Chinese Treebank (CTB).
        CRFClassifier<CoreLabel> classifier = new CRFClassifier<CoreLabel>(props);
        classifier.loadClassifierNoExceptions("data/ctb.gz", props);
        classifier.flags.setProperties(props);

        String sentence = "他和我在学校里常打桌球。";
        String ret = doSegment(sentence, classifier);
        System.out.println(ret);
    }
}
5. Add the VM argument
-mx1g
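If you run from the command line instead of an IDE, the classpath from step 3 and the heap size from step 5 are passed roughly as follows (a sketch; it assumes seg.jar and the data directory sit in the current directory, and that the path separator is ';' rather than ':' on Windows):

javac -cp seg.jar SegDemo.java
java -mx1g -cp .:seg.jar SegDemo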
6. Output
serDictionary=data/dict-chris6.ser.gz
sighanCorporaDict=data
inputEncoding=UTF-8
sighanPostProcessing=true
Loading classifier from data/ctb.gz ... Loading Chinese dictionaries from 1 files:
data/dict-chris6.ser.gz
loading dictionaries from data/dict-chris6.ser.gz...Done. Unique words in ChineseDictionary is: 423200
done [31.8 sec].
serDictionary=data/dict-chris6.ser.gz
sighanCorporaDict=data
inputEncoding=UTF-8
sighanPostProcessing=true
INFO: TagAffixDetector: useChPos=false | useCTBChar2=true | usePKChar2=false
INFO: TagAffixDetector: building TagAffixDetector from data/dict/character_list and data/dict/in.ctb
Loading character dictionary file from data/dict/character_list
Loading affix dictionary from data/dict/in.ctb
他 和 我 在 学校 里 常 打 桌球 。
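Loading data/ctb.gz dominates the runtime ("done [31.8 sec]" above), while segmenting a sentence is fast, so load the classifier once and reuse it for every call. A minimal sketch building on the SegDemo class above; the second test sentence is only an illustrative addition, not part of the original demo:

import java.util.Properties;
import edu.stanford.nlp.ie.crf.CRFClassifier;
import edu.stanford.nlp.ling.CoreLabel;

public class SegBatchDemo {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("sighanCorporaDict", "data");
        props.setProperty("serDictionary", "data/dict-chris6.ser.gz");
        props.setProperty("inputEncoding", "UTF-8");
        props.setProperty("sighanPostProcessing", "true");

        // Expensive step: load the CTB model once.
        CRFClassifier<CoreLabel> classifier = new CRFClassifier<CoreLabel>(props);
        classifier.loadClassifierNoExceptions("data/ctb.gz", props);
        classifier.flags.setProperties(props);

        // Cheap step: segment any number of sentences with the same classifier.
        String[] sentences = {
            "他和我在学校里常打桌球。",
            "我们明天去图书馆。"   // illustrative extra sentence
        };
        for (String s : sentences) {
            System.out.println(SegDemo.doSegment(s, classifier));
        }
    }
}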
7. Other details to note: http://www.cnblogs.com/XP007/archive/2011/10/27/2227158.html