Using the Stanford NLP Tools for Chinese (中文)

The Stanford NLP tools include two components for processing Chinese: a word segmenter and a parser. For details, see:

http://nlp.stanford.edu/software/parser-faq.shtml#o

 

1. Word segmentation: the Chinese Word Segmenter

Download: http://nlp.stanford.edu/software/

Stanford Chinese Word Segmenter: a Java implementation of a CRF-based Chinese word segmenter.

The package is fairly large and needs a lot of memory at runtime, so if you run it from Eclipse you have to raise the JVM heap limit:

Run → Run Configurations → Arguments → VM arguments → -Xmx800m (maximum heap size of 800 MB)
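To check that the setting took effect, plain Java (no Stanford classes involved) can report the maximum heap the JVM will try to use, which should roughly match the -Xmx value:

// Prints the JVM's maximum heap size in megabytes.
System.out.println("Max heap: "
    + Runtime.getRuntime().maxMemory() / (1024 * 1024) + " MB");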

Demo code (modified by me, not fully verified):

import java.util.Properties;
import edu.stanford.nlp.ie.crf.CRFClassifier;

// Configure the segmenter. The paths below assume the "data" directory
// shipped with the segmenter release sits next to the program.
Properties props = new Properties();
props.setProperty("sighanCorporaDict", "data");
// props.setProperty("NormalizationTable", "data/norm.simp.utf8");
// props.setProperty("normTableEncoding", "UTF-8");
// below is needed because CTBSegDocumentIteratorFactory accesses it
props.setProperty("serDictionary", "data/dict-chris6.ser.gz");
props.setProperty("inputEncoding", "UTF-8");
props.setProperty("sighanPostProcessing", "true");

// Load the CRF model trained on the Penn Chinese Treebank (CTB).
CRFClassifier classifier = new CRFClassifier(props);
classifier.loadClassifierNoExceptions("data/ctb.gz", props);
// flags must be re-set after data is loaded
classifier.flags.setProperties(props);
// Alternatives for segmenting a whole file instead of a string:
// classifier.writeAnswers(classifier.test(args[0]));
// classifier.testAndWriteAnswers(args[0]);

// Segment a single string and print the result.
String result = classifier.testString("我是中国人!");
System.out.println(result);
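With sighanPostProcessing enabled, testString should return the segmented text as a single whitespace-delimited string. A minimal follow-up sketch (assuming that output format) splits it into individual tokens, which is the form the parser in section 2 expects:

// Split the segmenter output into word tokens on whitespace.
String[] tokens = result.split("\\s+");
for (String tok : tokens) {
    System.out.println(tok);
}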

2. Stanford Parser

See http://nlp.stanford.edu/software/parser-faq.shtml#o and also:

http://blog.csdn.net/leeharry/archive/2008/03/06/2153583.aspx

Depending on which trained grammar you load, the parser handles either English or Chinese. Its input is an already-segmented sentence; its output is the part-of-speech tags and the sentence's parse tree, along with the grammatical (dependency) relations.

English demo (included in the downloaded archive):

import java.util.Arrays;
import java.util.Collection;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.*;

// Load the English PCFG grammar and cap sentence length at 80 tokens.
LexicalizedParser lp = new LexicalizedParser("englishPCFG.ser.gz");
lp.setOptionFlags(new String[]{"-maxLength", "80", "-retainTmpSubcategories"});

// The input is a pre-tokenized sentence.
String[] sent = { "This", "is", "an", "easy", "sentence", "." };
Tree parse = (Tree) lp.apply(Arrays.asList(sent));
parse.pennPrint();
System.out.println();

// Convert the phrase-structure tree into typed dependencies.
TreebankLanguagePack tlp = new PennTreebankLanguagePack();
GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
Collection tdl = gs.typedDependenciesCollapsed();
System.out.println(tdl);
System.out.println();

// Or print both formats in one call via TreePrint.
TreePrint tp = new TreePrint("penn,typedDependenciesCollapsed");
tp.printTree(parse);
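If you want the part-of-speech tags as data rather than printed output, one option is Tree.taggedYield(), which returns the tagged leaves in sentence order (a sketch; the exact return type has varied across parser versions, so the cast below plays it safe):

import edu.stanford.nlp.ling.TaggedWord;

// Pull (word, POS-tag) pairs out of the parse tree.
for (Object o : parse.taggedYield()) {
    TaggedWord tw = (TaggedWord) o;
    System.out.println(tw.word() + "/" + tw.tag());
}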
The Chinese version is slightly different:

import java.util.Arrays;
import edu.stanford.nlp.parser.lexparser.LexicalizedParser;
import edu.stanford.nlp.trees.*;
import edu.stanford.nlp.trees.international.pennchinese.ChineseTreebankLanguagePack;

// Load a grammar trained on the Xinhua portion of the Penn Chinese
// Treebank instead of the English PCFG.
LexicalizedParser lp = new LexicalizedParser("xinhuaFactored.ser.gz");

// The input must already be segmented into words.
String[] sent = { "他", "和", "我", "在", "学校", "里", "常", "打", "桌球", "。" };
Tree parse = (Tree) lp.apply(Arrays.asList(sent));

parse.pennPrint();
System.out.println();

// TreePrint("penn,typedDependenciesCollapsed") alone only works for
// English; for Chinese, pass a ChineseTreebankLanguagePack as well.
TreePrint tp = new TreePrint("wordsAndTags,penn,typedDependenciesCollapsed",
    new ChineseTreebankLanguagePack());
tp.printTree(parse);
Sometimes, however, we want more than the printed dependency relations; we want programmatic access to the parse tree (as a graph). In that case, use something like the following:

String[] sent = { "他", "和", "我", "在", "学校", "里", "常", "打", "桌球", "。" };
// ParserSentence is my own wrapper around LexicalizedParser, not a
// Stanford class; it just parses a pre-segmented sentence as above.
ParserSentence ps = new ParserSentence();
Tree parse = ps.parserSentence(sent);
parse.pennPrint();

TreebankLanguagePack tlp = new ChineseTreebankLanguagePack();
GrammaticalStructureFactory gsf = tlp.grammaticalStructureFactory();
GrammaticalStructure gs = gsf.newGrammaticalStructure(parse);
Collection tdl = gs.typedDependenciesCollapsed();
System.out.println(tdl);
System.out.println();

// Each element is a TypedDependency(GrammaticalRelation reln,
// TreeGraphNode gov, TreeGraphNode dep).
for (Object o : tdl) {
    TypedDependency td = (TypedDependency) o;
    System.out.println(td.toString());
}
// GrammaticalStructure's method getGrammaticalRelation(TreeGraphNode gov,
// TreeGraphNode dep) returns the grammatical relation between two words.
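As a minimal sketch of that last point (reusing gs and tdl from the snippet above, and assuming the TreeGraphNode-based signature named in the comment), you can query the relation between a specific governor/dependent pair; for a pair taken from an existing dependency, it should reproduce that dependency's relation, e.g. nsubj:

if (!tdl.isEmpty()) {
    TypedDependency first = (TypedDependency) tdl.iterator().next();
    GrammaticalRelation rel = gs.getGrammaticalRelation(first.gov(), first.dep());
    System.out.println(rel);
}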




