Extending the ansj tokenizer plugin for Solr 5


Source code:
https://github.com/NLPchina/ansj_seg

Jar packages:
http://maven.nlpcn.org/org/ansj/
http://maven.nlpcn.org/org/nlpcn/nlp-lang
http://maven.nlpcn.org/org/ansj/tree_split/

Building the ansj plugin for Solr 5:
Download the latest ansj_seg source code and extend the Lucene 5 plugin project (ansj_seg/plugin/ansj_lucene5_plug):
add the class org.ansj.solr5.AnsjTokenizerFactory shown below, then compile to produce a new ansj_lucene5_plug-3.x.x.jar.

package org.ansj.solr5;
 
import java.io.BufferedReader;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.IOException;
import java.io.InputStreamReader;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;


import org.ansj.lucene.util.AnsjTokenizer;
import org.ansj.splitWord.analysis.IndexAnalysis;
import org.ansj.splitWord.analysis.ToAnalysis;
import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.util.TokenizerFactory;
import org.apache.lucene.util.AttributeFactory;
 
public class AnsjTokenizerFactory extends TokenizerFactory {

    boolean pstemming;           // read from the "pstemming" arg; not used further by this factory
    boolean isQuery;             // true: ToAnalysis (query time); false: IndexAnalysis (index time)
    private String stopwordsDir; // path to the stopword file, from the "stopwords" arg
    public Set<String> filter;   // stopword set passed to AnsjTokenizer
 
    public AnsjTokenizerFactory(Map<String, String> args) {
        super(args);
        isQuery = getBoolean(args, "isQuery", true);
        pstemming = getBoolean(args, "pstemming", false);
        stopwordsDir = get(args, "stopwords");
        addStopwords(stopwordsDir);
        // standard Lucene factory idiom: fail fast on unrecognized config attributes
        if (!args.isEmpty()) {
            throw new IllegalArgumentException("Unknown parameters: " + args);
        }
    }
    
    // Load the stopword list (UTF-8, one word per line) into the filter set.
    private void addStopwords(String dir) {
        if (dir == null) {
            System.out.println("no stopwords dir");
            return;
        }
        System.out.println("stopwords: " + dir);
        filter = new HashSet<String>();
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(new FileInputStream(new File(dir)), "UTF-8"))) {
            String word;
            while ((word = br.readLine()) != null) {
                filter.add(word);
            }
        } catch (FileNotFoundException e) {
            System.out.println("No stopword file found");
        } catch (IOException e) {
            System.out.println("stopword file io exception");
        }
    }
    
    @Override
    public Tokenizer create(AttributeFactory factory) {
        if (isQuery) {
            // query time: precise segmentation (ToAnalysis)
            return new AnsjTokenizer(new ToAnalysis(), filter);
        } else {
            // index time: fine-grained segmentation (IndexAnalysis) for better recall
            return new AnsjTokenizer(new IndexAnalysis(), filter);
        }
    }
}
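
Before wiring the factory into Solr, it can be sanity-checked from a plain main() method. The sketch below is illustrative only (the demo class name and sample sentence are made up); it assumes ansj_seg, nlp-lang, lucene-core, and lucene-analyzers-common 5.x are on the classpath:

package org.ansj.solr5;

import java.io.StringReader;
import java.util.HashMap;
import java.util.Map;

import org.apache.lucene.analysis.Tokenizer;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

// Hypothetical smoke test for AnsjTokenizerFactory; not part of the plugin itself.
public class AnsjTokenizerFactoryDemo {
    public static void main(String[] args) throws Exception {
        Map<String, String> params = new HashMap<String, String>();
        params.put("isQuery", "true");  // use ToAnalysis, as at query time
        // params.put("stopwords", "/path/to/stopwords.dic");  // optional: UTF-8, one word per line

        AnsjTokenizerFactory factory = new AnsjTokenizerFactory(params);
        Tokenizer tokenizer = factory.create();
        tokenizer.setReader(new StringReader("中文分词测试"));

        CharTermAttribute term = tokenizer.addAttribute(CharTermAttribute.class);
        tokenizer.reset();
        while (tokenizer.incrementToken()) {
            System.out.println(term.toString());  // print each segmented term
        }
        tokenizer.end();
        tokenizer.close();
    }
}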


Deployment here uses the Jetty server bundled with Solr 5. Copy the plugin and its dependency jars into /opt/solr/server/solr-webapp/webapp/WEB-INF/lib:
ansj_lucene5_plug-3.7.3.jar
ansj_seg-3.7.3.jar
nlp-lang-1.5.jar

Put the ansj configuration file (library.properties) in /opt/solr/server/resources.
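
For reference, a minimal library.properties might look like the sketch below. userLibrary and ambiguityLibrary are ansj's usual dictionary keys, but check the ansj_seg documentation for your version; the paths are placeholders:

# ansj user dictionary (custom words, one entry per line)
userLibrary=/opt/solr/server/resources/library/default.dic
# ansj ambiguity-correction dictionary
ambiguityLibrary=/opt/solr/server/resources/library/ambiguity.dic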

Finally, configure a fieldType in the schema that uses the new tokenizer; index time uses IndexAnalysis (isQuery="false") and query time uses ToAnalysis (the default):
    <fieldType name="text_ansj" class="solr.TextField" positionIncrementGap="100">
      <analyzer type="index">
        <tokenizer class="org.ansj.solr5.AnsjTokenizerFactory" isQuery="false" stopwords="/path/to/stopwords.dic"/>
      </analyzer>
      <analyzer type="query">
        <tokenizer class="org.ansj.solr5.AnsjTokenizerFactory" stopwords="/path/to/stopwords.dic"/>
      </analyzer>
    </fieldType>
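
With the fieldType defined, bind it to a field in the same schema (the field name below is just an example):

    <field name="content" type="text_ansj" indexed="true" stored="true"/>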

