Using OpenNLP for Document Classification


This article takes a look at how to do document classification with OpenNLP.

DoccatModel

To classify documents you need a maximum entropy model; in OpenNLP this is represented by DoccatModel.

import java.io.IOException;
import java.util.Set;
import java.util.SortedMap;

import org.junit.Assert;
import org.junit.Test;

import opennlp.tools.doccat.DoccatFactory;
import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizer;
import opennlp.tools.doccat.DocumentCategorizerME;
import opennlp.tools.doccat.DocumentSample;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.ObjectStreamUtils;
import opennlp.tools.util.TrainingParameters;

@Test
public void testSimpleTraining() throws IOException {
    ObjectStream<DocumentSample> samples = ObjectStreamUtils.createObjectStream(
        new DocumentSample("1", new String[]{"a", "b", "c"}),
        new DocumentSample("1", new String[]{"a", "b", "c", "1", "2"}),
        new DocumentSample("1", new String[]{"a", "b", "c", "3", "4"}),
        new DocumentSample("0", new String[]{"x", "y", "z"}),
        new DocumentSample("0", new String[]{"x", "y", "z", "5", "6"}),
        new DocumentSample("0", new String[]{"x", "y", "z", "7", "8"}));

    TrainingParameters params = new TrainingParameters();
    params.put(TrainingParameters.ITERATIONS_PARAM, 100);
    params.put(TrainingParameters.CUTOFF_PARAM, 0);

    DoccatModel model = DocumentCategorizerME.train("x-unspecified", samples,
        params, new DoccatFactory());

    DocumentCategorizer doccat = new DocumentCategorizerME(model);

    double[] aProbs = doccat.categorize(new String[]{"a"});
    Assert.assertEquals("1", doccat.getBestCategory(aProbs));

    double[] bProbs = doccat.categorize(new String[]{"x"});
    Assert.assertEquals("0", doccat.getBestCategory(bProbs));

    // test to make sure the sorted map's last key is category "1" because it has the highest score
    SortedMap<Double, Set<String>> sortedScoreMap = doccat.sortedScoreMap(new String[]{"a"});
    Set<String> cat = sortedScoreMap.get(sortedScoreMap.lastKey());
    Assert.assertEquals(1, cat.size());
}

For ease of testing, the training text here is hand-written as DocumentSample instances.

The categorize method returns an array of probabilities, one per category, and getBestCategory uses those probabilities to return the best-matching category.
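If you want the full distribution rather than just the top category, the DocumentCategorizer interface also lets you walk every category together with its probability. A minimal sketch, where doccat is the categorizer from the test above and "a" is the same toy input:

// print each known category with its probability for the given tokens
double[] probs = doccat.categorize(new String[]{"a"});
for (int i = 0; i < doccat.getNumberOfCategories(); i++) {
    System.out.println(doccat.getCategory(i) + " -> " + probs[i]);
}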

Running the test produces the following training output:

Indexing events with TwoPass using cutoff of 0
Computing event counts... done. 6 events
Indexing... done.
Sorting and merging events... done. Reduced 6 events to 6.
Done indexing in 0.13 s.
Incorporating indexed data for training...
done.
Number of Event Tokens: 6
Number of Outcomes: 2
Number of Predicates: 14
...done.
Computing model parameters ...
Performing 100 iterations.
1: ... loglikelihood=-4.1588830833596715 0.5
2: ... loglikelihood=-2.6351991759048894 1.0
3: ... loglikelihood=-1.9518912133474995 1.0
4: ... loglikelihood=-1.5599038834410852 1.0
5: ... loglikelihood=-1.3039748361952568 1.0
6: ... loglikelihood=-1.1229511041438864 1.0
7: ... loglikelihood=-0.9877356230661396 1.0
8: ... loglikelihood=-0.8826624290652341 1.0
9: ... loglikelihood=-0.7985244514476817 1.0
10: ... loglikelihood=-0.729543972551105 1.0
//...
95: ... loglikelihood=-0.0933856684859806 1.0
96: ... loglikelihood=-0.09245907503183291 1.0
97: ... loglikelihood=-0.09155090064000486 1.0
98: ... loglikelihood=-0.09066059844628399 1.0
99: ... loglikelihood=-0.08978764309881068 1.0
100: ... loglikelihood=-0.08893152970793908 1.0
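In practice you usually do not retrain on every run: the trained DoccatModel can be written to disk and loaded back later. A minimal sketch, assuming model is the DoccatModel trained above; the file name doccat.bin is just an example:

// save the trained model and reload it for later classification
void saveAndReload(DoccatModel model) throws IOException {
    // write the model to disk (the file name is illustrative)
    try (OutputStream out = new BufferedOutputStream(new FileOutputStream("doccat.bin"))) {
        model.serialize(out);
    }
    // read it back and build a categorizer from the loaded model
    try (InputStream in = new BufferedInputStream(new FileInputStream("doccat.bin"))) {
        DoccatModel loaded = new DoccatModel(in);
        DocumentCategorizer doccat = new DocumentCategorizerME(loaded);
        double[] probs = doccat.categorize(new String[]{"a", "b"});
        System.out.println(doccat.getBestCategory(probs));
    }
}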

Summary

OpenNLP's categorize method expects the input to be tokenized already, which is not very convenient when calling it on its own, but it is understandable for a pipeline design, where tokenization and other preprocessing happen in earlier pipeline stages. This article only uses the official test source code as an introduction; readers can download a Chinese text classification training set, train a model on it, and then classify Chinese text, as sketched below.
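A hedged sketch of that kind of pipeline follows: it trains from a plain-text corpus file instead of hand-written samples, then tokenizes before classifying. The file name corpus.txt, the language code "zh" and the sample sentence are made up for illustration, and the corpus is assumed to be in OpenNLP's doccat training format (one document per line: the category label followed by whitespace-separated tokens). WhitespaceTokenizer only works for text that is already segmented; for raw Chinese you would plug in a real Chinese segmenter at that step.

import java.io.File;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import opennlp.tools.doccat.DoccatFactory;
import opennlp.tools.doccat.DoccatModel;
import opennlp.tools.doccat.DocumentCategorizer;
import opennlp.tools.doccat.DocumentCategorizerME;
import opennlp.tools.doccat.DocumentSample;
import opennlp.tools.doccat.DocumentSampleStream;
import opennlp.tools.tokenize.WhitespaceTokenizer;
import opennlp.tools.util.InputStreamFactory;
import opennlp.tools.util.MarkableFileInputStreamFactory;
import opennlp.tools.util.ObjectStream;
import opennlp.tools.util.PlainTextByLineStream;
import opennlp.tools.util.TrainingParameters;

public class CorpusPipelineSketch {

    public static void main(String[] args) throws IOException {
        // corpus.txt is a hypothetical training file: one document per line,
        // the category label first, then whitespace-separated tokens
        InputStreamFactory isf = new MarkableFileInputStreamFactory(new File("corpus.txt"));
        ObjectStream<String> lines = new PlainTextByLineStream(isf, StandardCharsets.UTF_8);
        ObjectStream<DocumentSample> samples = new DocumentSampleStream(lines);

        TrainingParameters params = new TrainingParameters();
        params.put(TrainingParameters.ITERATIONS_PARAM, 100);
        params.put(TrainingParameters.CUTOFF_PARAM, 0);

        DoccatModel model = DocumentCategorizerME.train("zh", samples, params, new DoccatFactory());
        DocumentCategorizer doccat = new DocumentCategorizerME(model);

        // tokenize first, then categorize; replace WhitespaceTokenizer with a
        // real Chinese segmenter when the input is raw, unsegmented Chinese
        String[] tokens = WhitespaceTokenizer.INSTANCE.tokenize("已经 分好 词 的 文本");
        double[] probs = doccat.categorize(tokens);
        System.out.println(doccat.getBestCategory(probs));
    }
}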

