I. IKAnalyzer
1. Copy IKAnalyzer4.0.jar, IKAnalyzer.cfg, and stopword.dic into the lib directory under the Solr home.
2. Add the following to schema.xml:
<!-- IKAnalyzer -->
<fieldType name="text_ik" class="solr.TextField">
  <analyzer class="org.wltea.analyzer.lucene.IKAnalyzer"/>
  <!-- Alternative: separate index-time and query-time analyzer chains
  <analyzer type="index">
    <tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="false"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
  <analyzer type="query">
    <tokenizer class="org.wltea.analyzer.solr.IKTokenizerFactory" isMaxWordLength="false"/>
    <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
    <filter class="solr.WordDelimiterFilterFactory" generateWordParts="1" generateNumberParts="1" catenateWords="1" catenateNumbers="1" catenateAll="0" splitOnCaseChange="1"/>
    <filter class="solr.LowerCaseFilterFactory"/>
    <filter class="solr.KeywordMarkerFilterFactory" protected="protwords.txt"/>
    <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
  </analyzer>
  -->
</fieldType>
Solr field configuration:
<dynamicField name="*_ik_s" type="text_ik" indexed="true" stored="true"/>
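With the dynamic field in place, any field whose name ends in `_ik_s` is analyzed with IK automatically. As a minimal sketch (the field name `title_ik_s` and the sample text are illustrative, not from the original note), a document posted to Solr's update handler might look like:

```xml
<add>
  <doc>
    <field name="id">1</field>
    <!-- name matches *_ik_s, so the text_ik type (IKAnalyzer) is applied -->
    <field name="title_ik_s">Solr中文分词测试</field>
  </doc>
</add>
```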
II. mmseg4j
Integration with Solr 4 is problematic: in our test only a single document was actually tokenized, and according to the documentation fixing this requires modifying the source, so we have set it aside for now.
1. Copy mmseg4j-all-1.9.0.jar into the lib directory under the Solr home. Note: version 1.9.0 is the release targeting Solr 4.0, but compatibility issues remain; a workaround and a patched jar can be found in the issues on the project wiki.
2. Create a dic directory under $SOLR_HOME and copy the .dic files from the data directory into it.
3. Add the following to schema.xml:
<fieldType name="text_mmseg" class="solr.TextField" positionIncrementGap="100">
  <analyzer>
    <tokenizer class="com.chenlb.mmseg4j.solr.MMSegTokenizerFactory" mode="complex" dicPath="dic"/>
    <filter class="solr.LowerCaseFilterFactory"/>
  </analyzer>
</fieldType>
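To actually index with this type, a field must reference it in schema.xml, mirroring the dynamic-field approach used for IK above. A minimal sketch (the names `content_mmseg` and `*_mmseg_s` are illustrative assumptions, not from the original note):

```xml
<!-- explicit field using the mmseg4j analyzer -->
<field name="content_mmseg" type="text_mmseg" indexed="true" stored="true"/>
<!-- or a dynamic field, so any *_mmseg_s field picks up the type -->
<dynamicField name="*_mmseg_s" type="text_mmseg" indexed="true" stored="true"/>
```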