Integrating Solr with the open-source Chinese word segmenter ansj

1. Where to download the ansj source code and jar packages

Source code:
https://github.com/NLPchina/ansj_seg

Jar packages:
http://maven.nlpcn.org/org/ansj/
http://maven.nlpcn.org/org/nlpcn/nlp-lang
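
If you would rather resolve these through Maven than download the jars by hand, a dependency sketch is shown below. The groupId/artifactId coordinates are inferred from the repository paths above and the versions from the jar names used later in this article, so treat them as assumptions and adjust them to what you actually deploy (the "-min" jar, for instance, may correspond to a Maven classifier):

<!-- illustrative only: coordinates inferred from the maven.nlpcn.org paths above -->
<repositories>
  <repository>
    <id>nlpcn</id>
    <url>http://maven.nlpcn.org/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.ansj</groupId>
    <artifactId>ansj_seg</artifactId>
    <version>2.0.8</version>
  </dependency>
  <dependency>
    <groupId>org.nlpcn</groupId>
    <artifactId>nlp-lang</artifactId>
    <version>0.3</version>
  </dependency>
</dependencies>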

2. Using ansj for word segmentation in Solr

(1) The ansj Solr extension and how to build it


ansj already provides a Lucene extension, which consists of the following jar packages:
ansj_lucene4_plug-2.0.2.jar
ansj_seg-2.0.8-min.jar 
nlp-lang-0.3.jar

To use ansj in Solr, you can build a small extension on top of the Lucene plug-in source:
The plug-in's source directory (a Maven project) is ansj_seg/plug/ansj_lucene4_plug. Import this Maven project, set up its dependencies, and add a Solr extension class, AnsjTokenizerFactory.
Building the project produces a new ansj_lucene4_plug-2.0.2.jar; rename it to ansj_lucene4_plug-2.0.2-solr.jar.

package org.ansj.solr;
 
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.Reader;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;


import org.ansj.lucene.util.AnsjTokenizer;
import org.ansj.splitWord.analysis.IndexAnalysis;
import org.ansj.splitWord.analysis.ToAnalysis;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;
 
public class AnsjTokenizerFactory extends TokenizerFactory {
    boolean pstemming;            // whether to stem English tokens
    boolean isQuery;              // true = query-time analysis, false = index-time analysis
    private String stopwordsDir;  // path to the stopword file
    public Set<String> filter;    // stopword set built from stopwordsDir
 
    public AnsjTokenizerFactory(Map<String, String> args) {
        super(args);
        assureMatchVersion();
        isQuery = getBoolean(args, "isQuery", true);
        pstemming = getBoolean(args, "pstemming", false);
        stopwordsDir = get(args,"stopwords");
        addStopwords(stopwordsDir);
    }
    
    // Load the stopword file (one word per line, UTF-8) into the filter set
    private void addStopwords(String dir) {
        if (dir == null) {
            System.out.println("no stopwords dir");
            return;
        }
        System.out.println("stopwords: " + dir);
        filter = new HashSet<String>();
        File file = new File(dir);
        // try-with-resources so the reader is closed even if reading fails
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(new FileInputStream(file), "UTF-8"))) {
            String word = br.readLine();
            while (word != null) {
                filter.add(word);
                word = br.readLine();
            }
        } catch (FileNotFoundException e) {
            System.out.println("No stopword file found");
        } catch (IOException e) {
            System.out.println("stopword file io exception");
        }
    }
    
    @Override
    public Tokenizer create(AttributeFactory factory, Reader input) {
        if (isQuery) {
            //query
            return new AnsjTokenizer(new ToAnalysis(new BufferedReader(input)), input, filter, pstemming);
        } else {
            //index
            return new AnsjTokenizer(new IndexAnalysis(new BufferedReader(input)), input, filter, pstemming);
        }
    }
}


(2) Deploying and configuring the ansj extension in Tomcat + Solr

Copy the following jar packages into ${tomcat}/webapps/solr/WEB-INF/lib/:
ansj_lucene4_plug-2.0.2-solr.jar
ansj_seg-2.0.8-min.jar
nlp-lang-0.3.jar

Add a text_ansj field type to the collection's schema.xml:
    <fieldType name="text_ansj" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
         <tokenizer class="org.ansj.solr.AnsjTokenizerFactory"  isQuery="false" stopwords="/xxx/tomcat/apache-tomcat-8.0.9/webapps/solr/WEB-INF/classes/stopwords.dic"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="org.ansj.solr.AnsjTokenizerFactory" stopwords="/xxx/tomcat/apache-tomcat-8.0.9/webapps/solr/WEB-INF/classes/stopwords.dic"/>
      </analyzer>
    </fieldType>
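
With the field type in place, assign it to the fields that should be segmented by ansj. The field name below is only an example:

    <field name="content" type="text_ansj" indexed="true" stored="true"/>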

(3) ansj configuration and custom dictionaries
Copy the following files into the ${tomcat}/webapps/solr/WEB-INF/classes directory:
ansj_seg/library
ansj_seg/train_file
ansj_seg/library.properties


Note the paths configured in library.properties: absolute paths are usually required, because relative paths are resolved against the directory from which Tomcat is started.
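
For reference, a minimal library.properties might look like the sketch below. The two keys follow the file shipped with ansj_seg 2.x, and the absolute paths are placeholders for wherever you copied the library directory:

# user dictionary and ambiguity dictionary; use absolute paths, since relative
# paths resolve against the directory Tomcat was started from
userLibrary=/xxx/tomcat/apache-tomcat-8.0.9/webapps/solr/WEB-INF/classes/library/default.dic
ambiguityLibrary=/xxx/tomcat/apache-tomcat-8.0.9/webapps/solr/WEB-INF/classes/library/ambiguity.dic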


------

Compared with the IK tokenizer, ansj is more accurate.

In fine-grained mode, IK produces some spurious tokens and also splits English words apart: for example, javascript may be segmented into ja, java, javascript, and nagios into nagios, ios, which causes problems at search time.
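
To sanity-check segmentation outside of Solr, you can call ansj directly. A minimal sketch, assuming the ansj_seg 2.x API in which ToAnalysis.parse returns a list of Term objects:

import java.util.List;

import org.ansj.domain.Term;
import org.ansj.splitWord.analysis.ToAnalysis;

public class AnsjDemo {
    public static void main(String[] args) {
        // segment a mixed Chinese/English sentence and print each token with its part-of-speech tag
        List<Term> terms = ToAnalysis.parse("在solr中使用javascript和nagios做监控");
        for (Term term : terms) {
            System.out.println(term.getName() + "\t" + term.getNatureStr());
        }
    }
}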

